Robot compliance control framework for grinding thin-walled parts with unknown surface: Deformation and orientation adaptation
Yuming Li, Zhihao Xu, Shufei Li, Zhaoyang Liao, Shuai Li, Xuefeng Zhou
Robotics and Computer-Integrated Manufacturing, Vol. 98, Article 103147. DOI: 10.1016/j.rcim.2025.103147. Published 2025-10-01.

In the context of intelligent manufacturing, robotic grinding is a pivotal technique for optimizing production processes, enhancing product quality, and driving the transformation towards a more intelligent manufacturing paradigm. Robotic grinding nevertheless faces significant challenges from dynamically deforming positions, variable stiffness, and uncertain contours caused by the uncertainties of thin-walled parts. This paper proposes an online force-orientation-motion double-loop controller; constant impedance control is also analyzed for comparison. The main advantage of the proposed method is that the grinding force is robust to dynamic disturbances and environmental uncertainties. Compared with traditional control methods that rely on precise environmental modeling, the proposed method enhances adaptability in complex machining environments through robust control based on online system feedback. Experimental results verify the effectiveness of the method in enhancing grinding quality, improving force control performance, and handling boundary constraints, demonstrating its suitability for thin-walled parts with unknown surfaces.
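The constant impedance baseline the paper analyzes can be illustrated with a minimal admittance-style force regulator. Everything below — the gains, the spring-like contact model, and the 10 N setpoint — is an illustrative assumption, not taken from the paper.

```python
# Minimal constant-admittance normal-force regulator, in the spirit of the
# constant impedance baseline. All gains and the contact model are assumed.

def admittance_step(f_meas, f_ref, x, dx, m=1.0, b=50.0, dt=0.001):
    """One Euler step of m*ddx + b*dx = f_ref - f_meas, where x is the
    commanded penetration along the surface normal. Omitting a stiffness
    term yields zero steady-state force error against a spring-like part."""
    ddx = (f_ref - f_meas - b * dx) / m
    dx = dx + ddx * dt
    x = x + dx * dt
    return x, dx

k_env = 5000.0   # assumed contact stiffness of the part (N/m)
f_ref = 10.0     # desired grinding normal force (N)
x, dx = 0.0, 0.0
for _ in range(5000):             # 5 s at 1 kHz
    f_meas = k_env * max(x, 0.0)  # contact force from penetration depth x
    x, dx = admittance_step(f_meas, f_ref, x, dx)
# f_meas settles at the 10 N setpoint (x -> f_ref / k_env = 2 mm)
```

A deformation-adaptive controller like the paper's would additionally update the reference trajectory and tool orientation online; this sketch shows only the inner force loop.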
Digital twin-empowered robotic arm manipulation with reinforcement learning: A comprehensive survey
Yichen Wang, Shuai Zheng, Ze Yang, Yingnan Zhu, Sen Zhang, Jiewu Leng, Jun Hong
Robotics and Computer-Integrated Manufacturing, Vol. 98, Article 103151. DOI: 10.1016/j.rcim.2025.103151. Published 2025-09-30.

Recent decades have witnessed rapid development and increasingly widespread application of robotics across industries. The robotic arm, a key component of robotics, has attracted the attention of scholars and experts through its use in many smart-factory tasks. Digital Twin (DT), an emerging virtual-physical bridging technique, offers significant advantages over testing robotic arm manipulation algorithms only in simulation: by enabling accurate validation in real environments, DT provides a realistic basis for testing and optimizing their feasibility. This paper surveys the state of the art in DT-empowered robotic arm manipulation techniques and sketches their future development, analyzing the entire workflow from task definition to path planning, simulation environments, and virtual-real communication. First, diverse manipulation tasks such as catching, picking and placing, and assembling are reviewed along with methods for path planning and collision avoidance. Second, the paper discusses the evolution of path planning algorithms for robotic arm manipulation, highlighting reinforcement learning methods such as Deep Q-learning and Proximal Policy Optimization. Third, it reviews simulation environments, including Unity, MuJoCo, ROS, and PyBullet, in which different deep learning methods are implemented. Finally, recently developed robotic arm DT systems, including new Augmented Reality and Virtual Reality aided applications, are analyzed. It is hoped that this study provides valuable insights into DT-empowered robotic arm techniques and paves the way for more advanced research.
From perception to precision: Vision-based mobile robotic manipulation for assembly screwdriving
Aleksandar Stefanov, Miha Zorman, Sebastjan Šlajpah, Janez Podobnik, Matjaž Mihelj, Marko Munih
Robotics and Computer-Integrated Manufacturing, Vol. 98, Article 103148. DOI: 10.1016/j.rcim.2025.103148. Published 2025-09-30.

Flexible manufacturing demands automation that is both precise and adaptable. However, tasks such as screwdriving are typically automated with costly, rigid robotic cells, which is impractical for low-volume, high-mix production. Mobile manipulators offer a scalable, flexible alternative, but achieving the precision required for screwdriving remains challenging due to localization uncertainties. This paper presents a vision-guided mobile robotic manipulation system that performs high-precision screwdriving using only monocular RGB imagery. The pipeline integrates stationary and onboard cameras with perception algorithms for object identification and segmentation, pose estimation, and CAD-based screw hole localization, compensating for base misalignment and object placement variability. Experimental validation using ISO 9283 metrics demonstrates translational accuracy between 0.21 mm and 0.50 mm across multiple screw positions. The system also achieves angular estimation errors as low as 0.07° to 0.20°, verifying sub-degree precision in orientation estimation. In 50 independent experiments comprising 400 screw insertions, the system achieved a 100% success rate, confirming its reliability in practical conditions. These results confirm the feasibility of RGB-only vision for precision screwdriving and highlight the system's scalability for real-world semi-structured manufacturing environments.
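For readers unfamiliar with the ISO 9283 metrics cited above, a minimal sketch of the positioning accuracy (AP) and repeatability (RP) computation follows. The sample points and commanded position are invented for illustration and are not the paper's data.

```python
import math

def iso9283_ap_rp(attained, commanded):
    """ISO 9283 pose accuracy AP (distance from the barycenter of attained
    points to the commanded point) and repeatability RP (mean radial
    deviation from the barycenter + 3 sigma). Points are (x, y, z) in mm."""
    n = len(attained)
    bary = tuple(sum(p[i] for p in attained) / n for i in range(3))
    ap = math.dist(bary, commanded)
    radii = [math.dist(p, bary) for p in attained]
    mean_r = sum(radii) / n
    std_r = math.sqrt(sum((r - mean_r) ** 2 for r in radii) / (n - 1))
    return ap, mean_r + 3.0 * std_r

# Hypothetical commanded screw-hole position and attained positions (mm).
commanded = (100.0, 50.0, 10.0)
attained = [(100.2, 50.1, 10.0), (100.3, 49.9, 10.1), (100.25, 50.0, 9.95)]
ap, rp = iso9283_ap_rp(attained, commanded)  # ap is roughly 0.25 mm here
```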
Intelligent support-free additive manufacturing path planning via method library and neural network
Zhengren Tong, Lai Xu, Xianglong Li, Chen Yang, Qinfeng Wang, Hongyao Shen
Robotics and Computer-Integrated Manufacturing, Vol. 98, Article 103156. DOI: 10.1016/j.rcim.2025.103156. Published 2025-09-30.

Support-free additive manufacturing achieves self-supporting fabrication by adjusting the platform's posture, effectively reducing material waste and simplifying post-processing. However, the diversity of industrial part geometries requires different approaches to planning robotic manufacturing paths, and traditional selection of support-free path planning methods relies heavily on expert knowledge. This paper proposes an intelligent path planning system built on a method library of seven approaches that covers support-free path planning for various types of parts. An additive manufacturing strategy matching neural network (AMMatcher) matches the optimal path planning method from the library to a given model and identifies its base surface. AMMatcher analyzes the model's multi-scale features and uses a cross-task attention mechanism to propagate classification features into the segmentation task, improving network performance. A newly proposed support-free additive manufacturing model dataset (SFAMDataset) is used to evaluate AMMatcher, and typical samples are validated through fabrication experiments on three different manufacturing platforms. Experimental results demonstrate that AMMatcher effectively identifies suitable manufacturing strategies for various model types and adapts well across manufacturing platforms.
Simultaneous high transparency and robust stability-oriented Physical Human-Robot Interaction using an Interaction Intention Filter and a vibration observer
Junsheng Huang, Mingxing Yuan, Xuebo Zhang
Robotics and Computer-Integrated Manufacturing, Vol. 98, Article 103150. DOI: 10.1016/j.rcim.2025.103150. Published 2025-09-30.

Physical Human-Robot Interaction (pHRI) systems with compliant admittance controllers typically use F/T sensors to capture the forces applied by the operator. However, the impedance force feedback generated by the robot's motion and the impedance of the human hand can significantly distort the intentional forces, leading to vibrations that compromise both interaction transparency and stability. To address this, the authors propose a variable admittance control strategy that incorporates an Interaction Intention Filter (IIF) and an Enhanced Time-Domain Vibration Observer (ETDVO). The IIF is designed from a frequency-domain analysis of force signals collected in real-world human-robot cooperation tasks and prevents unintended impedance force feedback from reaching the admittance controller. To ensure interaction stability across diverse environments, a variable-width time-window ETDVO accurately computes a vibration index; leveraging this index, a variable admittance strategy based on exponential mapping rapidly adjusts the admittance parameters, suppressing vibrations and enhancing stability. The strategy is validated through human-robot cooperative laser tracking experiments on a 7-DoF manipulator. Statistical results demonstrate that the approach improves interaction transparency and significantly enhances overall stability: compared to a stable high-gain admittance controller, Task Time, Required Energy, and Mean Force are reduced by over 10%, 54%, and 58%, respectively.
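The idea of mapping a vibration index to admittance gains through an exponential curve can be sketched as follows. The specific index (a detrended-RMS stand-in for the ETDVO), the mapping form, and the gain ranges are all assumptions for illustration, not the paper's formulas.

```python
import math

def vibration_index(force_window):
    """Crude vibration index: RMS of the detrended force window, squashed
    to [0, 1]. A simple stand-in for the paper's ETDVO."""
    n = len(force_window)
    mean = sum(force_window) / n
    rms = math.sqrt(sum((f - mean) ** 2 for f in force_window) / n)
    return rms / (1.0 + rms)

def variable_damping(v_index, b_min=10.0, b_max=200.0, alpha=5.0):
    """Exponential mapping from vibration index to admittance damping:
    low vibration -> low damping (transparent interaction), high vibration
    -> high damping (robust stability). Gains and alpha are assumed."""
    v = min(max(v_index, 0.0), 1.0)
    return b_min + (b_max - b_min) * (math.expm1(alpha * v) / math.expm1(alpha))

steady = [5.0] * 64                                  # calm cooperation
shaky = [5.0 + (-1) ** i * 3.0 for i in range(64)]   # oscillating force
b_calm = variable_damping(vibration_index(steady))   # stays at b_min
b_vib = variable_damping(vibration_index(shaky))     # ramps up damping
```

The exponential shape keeps damping near its floor for small indices (preserving transparency) while reacting sharply once vibration grows, which matches the stated intent of the paper's mapping.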
Modeling and compensation of measurement errors in hand-eye system for heavy-load industrial robots with line laser sensor
Xiaoyu Guo, Bao Zhu, Meng Chi, Chen Liu, Yanding Wei, Qiang Fang
Robotics and Computer-Integrated Manufacturing, Vol. 98, Article 103155. DOI: 10.1016/j.rcim.2025.103155. Published 2025-09-30.

During continuous scanning in which a heavy-load robot carries a line laser sensor, measurement accuracy is susceptible to both geometric errors and joint deformations. Traditional elastogeometric error compensation methods rely heavily on the calibration accuracy of external measurement systems, which limits their flexibility and precision on site. To address this, the study proposes Multi-Set Cohesive Calibration (MSCC), which eliminates the need for high-precision external system calibration before parameter identification. MSCC integrates robot geometric errors, compliance errors, and extrinsic parameter errors into a unified error model and solves them collaboratively using multi-configuration measurement data, enhancing the stability and adaptability of the calibration system. To handle the resulting high-dimensional, strongly coupled parameter identification problem, a three-stage hybrid optimization algorithm, Exploration-Annealing-LM (EALM), improves convergence and global search capability during parameter estimation. In online measurement of large structural components, the proposed method achieves an average measurement error of 0.0545 mm and a maximum error of 0.1296 mm, reductions of 84.36% and 78.31%, respectively, compared with the uncompensated case.
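The LM stage of an EALM-style identification boils down to repeated damped least-squares steps. A toy two-parameter version is sketched below; calibration error models are linearized in practice, so a linear toy residual shows the machinery, and the model y = a*x + b and all values are illustrative assumptions.

```python
# Toy damped least-squares (Levenberg-Marquardt-style) parameter
# identification, the local-refinement step of an EALM-like pipeline.

def lm_step(J, r, lam):
    """Solve (J^T J + lam*I) d = -J^T r for a 2-parameter problem,
    with the 2x2 normal equations inverted in closed form."""
    a11 = sum(row[0] * row[0] for row in J) + lam
    a12 = sum(row[0] * row[1] for row in J)
    a22 = sum(row[1] * row[1] for row in J) + lam
    g1 = sum(row[0] * ri for row, ri in zip(J, r))
    g2 = sum(row[1] * ri for row, ri in zip(J, r))
    det = a11 * a22 - a12 * a12
    return (-(a22 * g1 - a12 * g2) / det, -(a11 * g2 - a12 * g1) / det)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * x + 1.0 for x in xs]   # "measurements" from true (a, b) = (2, 1)
a, b = 0.0, 0.0                    # initial guess
for _ in range(20):
    r = [a * x + b - y for x, y in zip(xs, ys)]   # residuals
    J = [[x, 1.0] for x in xs]                    # Jacobian d r / d (a, b)
    da, db = lm_step(J, r, lam=1e-6)
    a, b = a + da, b + db
# (a, b) converges to the true (2.0, 1.0)
```

In a real calibration the residuals would be pose errors from the unified error model and the parameter vector would be high-dimensional, which is why the paper precedes this step with exploration and annealing stages.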
A mixed reality-assisted scene-centric robot programming approach for human-robot collaborative manufacturing
Yue Yin, Junming Fan, Ang Liu, Pai Zheng
Robotics and Computer-Integrated Manufacturing, Vol. 98, Article 103146. DOI: 10.1016/j.rcim.2025.103146. Published 2025-09-29.

While the mass-personalization manufacturing paradigm increasingly requires robots to handle complex and variable tasks, traditional robot-centric programming methods remain constrained by their expert-dependent nature and lack of adaptability. This research proposes a scene-centric robot programming approach using MR-assisted interactive 3D segmentation, in which operators naturally manipulate the digital twin (DT) of real-world objects to control the robot instead of performing cumbersome end-effector programming. The framework combines the Segment Anything Model (SAM) and 3D Gaussian Splatting (3DGS) for cost-effective, zero-shot, flexible scene reconstruction and segmentation, while scale consistency and multi-coordinate calibration ensure seamless MR-driven interaction and robot execution. Experimental results verify improved segmentation accuracy and computational efficiency, particularly in cluttered industrial environments, and case studies validate the method's feasibility for real-world implementation. The work illustrates a promising human-robot collaborative manufacturing paradigm in which virtual scene editing directly informs robot actions, demonstrating a novel MR-assisted interaction method beyond low-level robot movement control.
CCM-FCC: LLM-powered cognition-centered AI agent framework for proactive human-robot collaboration
Pengfei Ding, Jie Zhang, Peng Zhang, Hongsen Li, Dexian Wang
Robotics and Computer-Integrated Manufacturing, Vol. 98, Article 103145. DOI: 10.1016/j.rcim.2025.103145. Published 2025-09-27.

Proactive human-robot collaboration (PHRC) has relied primarily on predefined rule-based integration of perception, analysis, and decision-making into a unified framework, limiting autonomy and interactivity in dynamic scenarios such as disassembly and assembly. Although AI agents equipped with memory and interaction functions exhibit enhanced adaptability, their task-specific designs lack holistic cognition, limiting their generalization capability. This paper proposes a Large Language Model (LLM)-powered cognition-centered AI agent framework built on the "Cognitive Core Management-Functional Cluster Collaboration" (CCM-FCC) paradigm. To enhance the agent's generalization capability, a semantic Chain-of-Thought (CoT) prompt-learning-driven cognitive core predicts key task factors; by coupling task semantics with reasoning logic, the semantic CoT prompt learning empowers the pre-trained LLM to improve this prediction. To ensure centralized management of the cognitive core, a dual-dimensional feature-constrained functional activation module extracts task semantic cues from the key factors and autonomously activates functional modules within the agent, constrained by task complexity and operator state. A task-semantic-driven functional cluster collaboration module then generates the optimal collaboration strategy, and a deep reinforcement learning model enables the robot to proactively collaborate with the operator for PHRC. Experiments on HRC tasks demonstrate the effectiveness of the proposed method.
Human-aware scheduling for sustainable manufacturing: A review of dynamic job shop scheduling in the era of Industry 5.0
Hudaifah Hudaifah, Haitham Saleh, Anas Alghazi, Ahmet Kolus, Umar Alturki, Sami Elferik
Robotics and Computer-Integrated Manufacturing, Vol. 98, Article 103143. DOI: 10.1016/j.rcim.2025.103143. Published 2025-09-25.

In the context of Industry 5.0, job scheduling must evolve beyond traditional efficiency-focused approaches to incorporate adaptability, sustainability, and human-centricity. Although Industry 4.0 technologies such as IoT, digital twins, and sensors have enabled real-time, dynamically adaptive scheduling, most current systems still rely on static models and lack integrated consideration of environmental and human factors in dynamic scheduling contexts. Realizing the vision of Industry 5.0 in practice calls for dynamic scheduling methods that unify these dimensions. Given the limited research in this area, the study proposes a comprehensive research framework for sustainable dynamic job scheduling, supported by structured conceptual models that explicitly outline how dynamic factors, environmental aspects, and human factors can be systematically incorporated into job scheduling problems. A systematic literature review assesses recent progress and identifies underexplored areas. The resulting framework is intended to provide a clear, structured foundation for future research on intelligent, adaptive, eco-friendly, and human-aware scheduling systems aligned with the demands of Industry 5.0.
A vision-based self-calibration method for industrial robots using variable pose constraints
Yiyang Feng, Jianhui He, Jingbo Luo, Zaojun Fang, Chi Zhang, Guilin Yang
Robotics and Computer-Integrated Manufacturing, Vol. 98, Article 103142. DOI: 10.1016/j.rcim.2025.103142. Published 2025-09-23.

Among the geometric constraints employed for robot self-calibration, the pose constraint, which simultaneously restricts the position and orientation of the robot end-effector, is the most comprehensive and effective. Because it is difficult to control the robot to precisely satisfy pose constraints, a vision-based robot pose measurement system is designed, consisting mainly of two monochrome cameras fixed on an adjustment stage and a pose target module mounted on the robot end-effector. Variable pose constraints are established when two or more robot poses are measured by the cameras at a fixed location. Based on the product-of-exponentials (POE) formula, a new self-calibration model is formulated in which the robot pose errors are expressed in the tool frame and the position errors are decoupled from the orientation measurement errors; it is therefore more accurate and robust than the conventional model, in which pose errors are expressed in the base frame and position errors are coupled with orientation measurement errors. Both simulations and experiments validate the effectiveness of the proposed method. On the Aubo i5 robot, calibration reduces the average position error from 2.47 mm to 0.77 mm and the average orientation error from 0.016 rad to 0.0039 rad.
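The orientation errors quoted in radians correspond to the angle of the relative rotation between measured and nominal end-effector orientations. A minimal sketch of that computation follows, with an invented example rotation; the check values are illustrative, not the paper's measurements.

```python
import math

def rotation_error_rad(R_meas, R_nom):
    """Angle of the relative rotation R_meas @ R_nom^T, i.e. the scalar
    orientation error in radians. Inputs are 3x3 nested lists."""
    # trace(R_meas @ R_nom^T) equals the sum of elementwise products
    tr = sum(R_meas[i][j] * R_nom[i][j] for i in range(3) for j in range(3))
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))  # clamp for numeric safety
    return math.acos(c)

# Invented check: a 0.016 rad rotation about z against the identity.
theta = 0.016
R_meas = [[math.cos(theta), -math.sin(theta), 0.0],
          [math.sin(theta),  math.cos(theta), 0.0],
          [0.0, 0.0, 1.0]]
R_nom = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
err = rotation_error_rad(R_meas, R_nom)  # recovers 0.016 rad
```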