Path Planning and Control of Building Robots Based on Reinforcement Learning Algorithm in Intelligent Construction

Rendong Jin
Procedia Computer Science, Volume 261, 2025, Pages 637–646
DOI: 10.1016/j.procs.2025.04.255
https://www.sciencedirect.com/science/article/pii/S1877050925013572

Abstract

At present, the construction industry is actively promoting new construction methods such as intelligent and object-intensive construction. Mobile operation robots, as one of the important solutions in the intelligent construction environment, involve key technologies such as obstacle avoidance, path planning, positioning, navigation, sensing, and communication; among these, motion control and path planning are considered the most complex tasks. This paper studies the algorithmic principles and optimization methods of reinforcement learning for the path planning and control of mobile operation robots in the intelligent construction environment. Reinforcement learning realizes policy iteration through a reward-and-penalty mechanism and can adaptively find the best course of action in an unfamiliar environment. Q-learning, as a classic algorithm, seeks to maximize long-term reward by iteratively updating a value function. However, conventional Q-learning suffers from sparse rewards, slow convergence, and a tendency to fall into local optima depending on how the Q-values are initialized. To this end, this paper introduces the artificial potential field method, using attractive and repulsive potential fields to guide path planning, and improves the Q-value function by integrating the potential fields into the reward mechanism. The attractive potential field draws the robot toward the target point, while the repulsive potential field keeps it away from obstacles. Experiments show that this method effectively solves the path planning problem in complex environments and offers a new approach to intelligent navigation for construction robots.
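The combination the abstract describes — Q-learning whose reward is augmented by an artificial potential field — can be illustrated with a minimal grid-world sketch. This is not the paper's implementation: the grid size, obstacle layout, potential-field gains (`k_att`, `eta`, `d0`), and all learning hyperparameters are illustrative assumptions. The attractive term grows with squared distance to the goal, the repulsive term activates within an influence radius `d0` of each obstacle, and the per-step reward is shaped with the negated potential so that moves that lower the potential earn a bonus.

```python
import math
import random

GRID = 8                               # assumed 8x8 grid world
GOAL = (7, 7)
OBSTACLES = {(3, 3), (3, 4), (4, 3)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def potential(cell, k_att=0.5, eta=2.0, d0=2.0):
    """Artificial potential: attractive toward GOAL, repulsive near obstacles."""
    u_att = 0.5 * k_att * math.hypot(cell[0] - GOAL[0], cell[1] - GOAL[1]) ** 2
    u_rep = 0.0
    for ob in OBSTACLES:
        d = math.hypot(cell[0] - ob[0], cell[1] - ob[1])
        if 0 < d < d0:                 # repulsion acts only inside radius d0
            u_rep += 0.5 * eta * (1.0 / d - 1.0 / d0) ** 2
    return u_att + u_rep

def step(s, a):
    """Deterministic transition: clamp to the grid, penalize collisions."""
    nxt = (min(max(s[0] + a[0], 0), GRID - 1),
           min(max(s[1] + a[1], 0), GRID - 1))
    if nxt in OBSTACLES:
        return s, -10.0, False         # collision: stay put, large penalty
    if nxt == GOAL:
        return nxt, 100.0, True
    return nxt, -1.0, False            # step cost encourages short paths

def train(episodes=3000, alpha=0.1, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning with potential-field reward shaping."""
    rng = random.Random(seed)
    Q = {}
    def q(s):
        return Q.setdefault(s, [0.0] * len(ACTIONS))
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(200):
            if rng.random() < eps:     # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q(s)[i])
            s2, r, done = step(s, ACTIONS[a])
            # Shaping term: moving to lower potential yields extra reward.
            shaped = r + gamma * (-potential(s2)) - (-potential(s))
            target = shaped if done else shaped + gamma * max(q(s2))
            q(s)[a] += alpha * (target - q(s)[a])
            s = s2
            if done:
                break
    return Q

def greedy_path(Q, max_len=100):
    """Roll out the learned greedy policy from the start cell."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_len):
        vals = Q.get(s, [0.0] * len(ACTIONS))
        a = max(range(len(ACTIONS)), key=lambda i: vals[i])
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

The shaping follows the standard potential-based form (reward plus discounted change in the negated potential), which biases exploration toward the goal and away from obstacles without changing which policies are optimal, mitigating the sparse-reward and slow-convergence issues the abstract attributes to plain Q-learning.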