{"title":"Multiagent Reinforcement Learning Enhanced Decision-making of Crew Agents During Floor Construction Process","authors":"Bin Yang, Boda Liu, Yilong Han, Xin Meng, Yifan Wang, Hansi Yang, Jianzhuang Xia","doi":"arxiv-2409.01060","DOIUrl":null,"url":null,"abstract":"Fine-grained simulation of floor construction processes is essential for\nsupporting lean management and the integration of information technology.\nHowever, existing research does not adequately address the on-site\ndecision-making of constructors in selecting tasks and determining their\nsequence within the entire construction process. Moreover, decision-making\nframeworks from computer science and robotics are not directly applicable to\nconstruction scenarios. To facilitate intelligent simulation in construction,\nthis study introduces the Construction Markov Decision Process (CMDP). The\nprimary contribution of this CMDP framework lies in its construction knowledge\nin decision, observation modifications and policy design, enabling agents to\nperceive the construction state and follow policy guidance to evaluate and\nreach various range of targets for optimizing the planning of construction\nactivities. 
The CMDP is developed on the Unity platform, utilizing a two-stage\ntraining approach with the multi-agent proximal policy optimization algorithm.\nA case study demonstrates the effectiveness of this framework: the low-level\npolicy successfully simulates the construction process in continuous space,\nfacilitating policy testing and training focused on reducing conflicts and\nblockages among crews; and the high-level policy improving the spatio-temporal\nplanning of construction activities, generating construction patterns in\ndistinct phases, leading to the discovery of new construction insights.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computational Engineering, Finance, and Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.01060","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Fine-grained simulation of floor construction processes is essential for
supporting lean management and the integration of information technology.
However, existing research does not adequately address the on-site
decision-making of constructors in selecting tasks and determining their
sequence within the entire construction process. Moreover, decision-making
frameworks from computer science and robotics are not directly applicable to
construction scenarios. To facilitate intelligent simulation in construction,
this study introduces the Construction Markov Decision Process (CMDP). The
primary contribution of the CMDP framework lies in its incorporation of
construction knowledge into the decision, observation, and policy designs,
enabling agents to perceive the construction state and follow policy guidance
to evaluate and reach a range of targets, thereby optimizing the planning of
construction activities. The CMDP is developed on the Unity platform, using a two-stage
training approach with the multi-agent proximal policy optimization algorithm.
A case study demonstrates the effectiveness of this framework: the low-level
policy successfully simulates the construction process in continuous space,
facilitating policy testing and training focused on reducing conflicts and
blockages among crews, while the high-level policy improves the spatio-temporal
planning of construction activities, generating distinct construction patterns
in different phases and leading to new construction insights.
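At its core, any MDP formulation such as the CMDP reduces to agents observing a state, selecting an action (here, a construction task), and receiving a reward until the episode ends. The following minimal sketch illustrates that generic loop for a crew agent choosing among pending tasks; the task names, flat reward, and random policy are illustrative assumptions, not the paper's actual implementation.

```python
import random

class ToyConstructionMDP:
    """Toy MDP: a crew agent completes one pending task per step;
    the episode ends when all tasks are done."""

    def __init__(self, tasks):
        self.tasks = list(tasks)
        self.reset()

    def reset(self):
        self.pending = set(self.tasks)
        return frozenset(self.pending)  # observation: which tasks remain

    def step(self, action):
        assert action in self.pending, "agent must choose a pending task"
        self.pending.remove(action)
        reward = 1.0                    # flat reward per completed task (assumption)
        done = not self.pending
        return frozenset(self.pending), reward, done

def run_episode(env, policy, seed=0):
    """Roll out one episode and return the total accumulated reward."""
    rng = random.Random(seed)
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = policy(obs, rng)
        obs, reward, done = env.step(action)
        total += reward
    return total

# A placeholder policy: pick a pending task uniformly at random.
random_policy = lambda obs, rng: rng.choice(sorted(obs))

env = ToyConstructionMDP(["rebar", "formwork", "pour", "cure"])
print(run_episode(env, random_policy))  # 4.0 — one unit of reward per task
```

In the paper's setting, the random policy above would be replaced by one trained with multi-agent PPO, and the observation would encode the construction state rather than a bare set of task names.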