STGN: A Spatio-Temporal Graph Network for Real-Time and Generalizable Trajectory Planning

Impact Factor 6.4 · CAS Region 2 (Computer Science) · JCR Q1 (Automation & Control Systems)
Runjiao Bao;Yongkang Xu;Chenhao Wang;Tianwei Niu;Junzheng Wang;Shoukun Wang
DOI: 10.1109/TASE.2025.3614472
IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 21897-21912, published 25 September 2025.
Citations: 0

Abstract

In dynamic and unstructured environments, mobile robots need to generate safe and efficient trajectories in real time, which poses significant challenges due to the uncertainty of surrounding obstacles. To address this, this article presents a real-time obstacle avoidance trajectory planning method, built upon a spatio-temporal graph network that integrates temporal modeling with graph attention mechanisms. The proposed network captures both temporal dynamics and spatial structural dependencies in dynamic environments by integrating a temporal information module based on long short-term memory (LSTM) and a spatial module based on relational graph attention networks (RGAT). On the whole, the approach follows a two-phase pipeline. In the offline phase, a high-quality trajectory dataset is constructed to represent the heterogeneous state graph of the robot and surrounding obstacles. Then the dataset is used to train the spatio-temporal network, which learns to map environment-state graphs to optimal control commands. In the online phase, the trained network is deployed on the robot to perform real-time perception, decision-making, and control, forming a closed-loop trajectory optimization process. Extensive experiments in both simulated and real-world scenarios demonstrate that the proposed method achieves high-quality trajectory planning, robust obstacle avoidance, and fast generalization under multi-obstacle and sudden disturbance conditions, while maintaining low computational overhead.

Note to Practitioners—This article addresses the generalization challenges commonly encountered in traditional supervised learning-based obstacle avoidance methods. We propose a trajectory planning framework that leverages a spatiotemporal graph network to model dynamic interactions between a robot and its surrounding obstacles. This approach enables robust and adaptable behavior in complex, changing environments by explicitly capturing both spatial and temporal dependencies. The system is implemented on a four-wheel steering (4WS) robot for experimental validation. However, its modular design ensures straightforward transferability to a wide range of mobility platforms. The proposed method requires only basic obstacle position information, readily available from standard onboard sensors such as LiDAR or radar, and does not depend on raw sensor inputs or semantic maps, making it highly suitable for real-world deployment. It can be easily integrated into existing perception–planning–control pipelines, and future extensions may focus on incorporating richer semantic information and expanding to more diverse obstacle types.
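To make the abstract's architecture concrete, the following is a minimal NumPy sketch of the general pattern it describes: a relation-aware graph attention layer aggregates per-node obstacle/robot states into a spatial embedding, and an LSTM cell rolls that embedding forward in time before a linear head emits a control command. This is an illustrative sketch only, not the authors' implementation; all dimensions, the relation types, the graph topology, and the control head are hypothetical assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rgat_layer(X, edges, rel, Wr, a):
    """Single-head relational graph attention (GAT-style, simplified).
    X: (N, F) node features; edges: list of (src, dst) pairs;
    rel[k]: relation id of edge k; Wr: (R, F_out, F) per-relation
    projections; a: (2*F_out,) attention vector."""
    N, F_out = X.shape[0], Wr.shape[1]
    out = np.zeros((N, F_out))
    for dst in range(N):
        incoming = [k for k, (_, d) in enumerate(edges) if d == dst]
        if not incoming:
            continue
        msgs, logits = [], []
        for k in incoming:
            src, _ = edges[k]
            m = Wr[rel[k]] @ X[src]           # relation-specific message
            h_dst = Wr[rel[k]] @ X[dst]
            e = np.concatenate([h_dst, m]) @ a
            logits.append(np.where(e > 0, e, 0.2 * e))  # LeakyReLU
            msgs.append(m)
        logits = np.array(logits)
        w = np.exp(logits - logits.max())     # softmax over neighbors
        w /= w.sum()
        out[dst] = sum(wi * mi for wi, mi in zip(w, msgs))
    return out

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate order [input, forget, cell, output]."""
    H = h.size
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
    g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:4 * H])
    c = f * c + i * g
    return o * np.tanh(c), c

# Demo: robot node 0 plus three obstacle nodes, observed over 5 steps.
rng = np.random.default_rng(0)
F, F_out, H, R = 4, 8, 8, 2                  # all dims hypothetical
Wr = rng.normal(0, 0.1, (R, F_out, F))
a = rng.normal(0, 0.1, 2 * F_out)
W = rng.normal(0, 0.1, (4 * H, F_out))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
W_ctrl = rng.normal(0, 0.1, (2, H))          # head -> (speed, steering)

edges = [(1, 0), (2, 0), (3, 0)]             # obstacle -> robot edges
rel = [0, 0, 1]                              # e.g. static vs. dynamic
h, c = np.zeros(H), np.zeros(H)
for t in range(5):
    X = rng.normal(size=(4, F))              # per-node state (pos, vel, ...)
    g = rgat_layer(X, edges, rel, Wr, a)     # spatial aggregation
    h, c = lstm_step(g[0], h, c, W, U, b)    # temporal rollup (robot node)
u = W_ctrl @ h                               # control command
print(u.shape)                               # (2,)
```

In the paper's closed-loop setting these weights would be learned offline from the trajectory dataset; here they are random, so the sketch only demonstrates the data flow from state graph to command.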
Source journal: IEEE Transactions on Automation Science and Engineering (Engineering Technology: Automation & Control Systems)
CiteScore: 12.50
Self-citation rate: 14.30%
Articles per year: 404
Review time: 3.0 months
Journal description: The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.