{"title":"Automatic clash avoidance in steel reinforcement design using explainable graph neural networks and rebar embedding learning","authors":"Mingkai Li , Boyu Wang , Xingyu Tao , Zhengyi Chen , Jack C.P. Cheng , Zinan Wu","doi":"10.1016/j.autcon.2025.106161","DOIUrl":"10.1016/j.autcon.2025.106161","url":null,"abstract":"<div><div>Steel reinforcement design is essential for the structural integrity and durability of reinforced concrete (RC) structures. However, rebar clashes frequently occur due to conventional design processes lacking precise bar positioning, leading to time-consuming and error-prone onsite modifications. Existing 3D analysis tools for clash detection are unsuitable for rebar design, which must comply with structural analysis and regional specifications. Therefore, this paper proposes an automatic and proactive rebar clash avoidance approach using graph neural networks (GNN) and rebar embedding learning. Vector and graph representations are introduced to model clash scenarios, while a GNN-based diagnosis framework detects clashes and classifies them as solvable or unsolvable. For unsolvable clashes, explainable GNN identifies the underlying causes, while Rebar2Vec generates optimal design alternatives to improve feasibility. Solvable clashes are resolved using multi-objective optimization, ensuring compliance with building codes. 
Experimental results demonstrate the approach's effectiveness in generating clash-free rebar layouts at the design stage.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"175 ","pages":"Article 106161"},"PeriodicalIF":9.6,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143820761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
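The diagnosis step described in this abstract — message passing over a clash graph, then labelling each clash solvable or unsolvable — can be sketched in miniature. This is an illustrative toy, not the paper's model: the node features (available clearance and bar diameter, in mm) and the hand-set linear rule are invented for demonstration.

```python
# Toy sketch (not the paper's GNN): one round of mean-neighbour message
# passing over a rebar "clash graph", then a hand-set linear rule that
# labels each clash edge solvable or unsolvable. Node features are
# hypothetical: (available clearance in mm, bar diameter in mm).

def message_pass(features, edges):
    """Blend each node's features with the mean of its clash neighbours."""
    neigh = {i: [] for i in features}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    out = {}
    for i, f in features.items():
        msgs = [features[j] for j in neigh[i]] or [f]
        agg = [sum(v) / len(msgs) for v in zip(*msgs)]
        out[i] = [(fi + ai) / 2 for fi, ai in zip(f, agg)]
    return out

def classify_clashes(features, edges):
    h = message_pass(features, edges)
    labels = {}
    for a, b in edges:
        clearance = h[a][0] + h[b][0]   # pooled free space around the pair
        demand = h[a][1] + h[b][1]      # pooled bar diameters
        labels[(a, b)] = "solvable" if clearance >= demand else "unsolvable"
    return labels

rebars = {0: [40.0, 16.0], 1: [35.0, 20.0], 2: [5.0, 32.0], 3: [4.0, 32.0]}
clashes = [(0, 1), (2, 3)]
print(classify_clashes(rebars, clashes))
```

In this toy, the edge between the two large bars in a cramped region comes out unsolvable, which is the case the paper routes to the explainable-GNN and Rebar2Vec stage rather than to the optimizer.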
{"title":"Enhancing stakeholder engagement and performance evaluation in building design using BIM and VR","authors":"Hongyang Li , Shuying Fang , Tingting Shi , Ned Wales , Martin Skitmore","doi":"10.1016/j.autcon.2025.106191","DOIUrl":"10.1016/j.autcon.2025.106191","url":null,"abstract":"<div><div>“VR + BIM” technologies address the growing complexity of construction evaluation systems driven by digital transformation, as traditional methods struggle to meet diverse stakeholder demands. This paper develops an automated evaluation framework integrating Building Information Modeling (BIM) and Virtual Reality (VR) to enhance decision-making processes and enable immersive multi-stakeholder collaboration and data-driven assessments. To verify system effectiveness, a “green + healthy” building scenario under sustainable construction trends is selected. Analysis based on a cloud model of 18 performance indicators reveals superior performance in user experience indicators such as “light environment”, “spatial privacy” but limitations in resource management indicators like “water conservation” and “source control.” The overall system performance transitions from “average” to “relatively good,” confirming automated assessment feasibility. 
The research contributes a quantitative methodology for BIM-VR integration, enhancing stakeholder engagement and building performance evaluation, offering micro-level design recommendations and macro-level policy interventions for “green + healthy” buildings, advancing digital, intelligent construction practices.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"175 ","pages":"Article 106191"},"PeriodicalIF":9.6,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143820680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
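The cloud-model analysis mentioned in this abstract follows the standard normal cloud model, which describes an indicator by expectation (Ex), entropy (En), and hyper-entropy (He) and generates "cloud drops" with membership degrees. The sketch below is a minimal illustration of that generic technique; the parameter values are invented and the paper's 18 indicators are not reproduced.

```python
# Minimal sketch of the normal cloud model used for indicator evaluation:
# Ex = expectation, En = entropy, He = hyper-entropy. Values are made up
# for illustration; the paper's indicator system is not reproduced.
import math
import random

def cloud_drops(ex, en, he, n, seed=0):
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_prime = abs(rng.gauss(en, he))  # per-drop entropy
        x = rng.gauss(ex, en_prime)        # sampled indicator value
        mu = math.exp(-(x - ex) ** 2 / (2 * en_prime ** 2)) if en_prime else 1.0
        drops.append((x, mu))              # (value, membership degree)
    return drops

# Average membership is one crude way to read how tightly sampled scores
# cluster around a grade centre, e.g. "relatively good" at Ex = 0.75.
drops = cloud_drops(0.75, 0.05, 0.01, 500)
avg_mu = sum(m for _, m in drops) / len(drops)
print(round(avg_mu, 2))
```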
{"title":"Automating construction contract review using knowledge graph-enhanced large language models","authors":"Chunmo Zheng , Saika Wong , Xing Su , Yinqiu Tang , Ahsan Nawaz , Mohamad Kassem","doi":"10.1016/j.autcon.2025.106179","DOIUrl":"10.1016/j.autcon.2025.106179","url":null,"abstract":"<div><div>An effective and efficient review of construction contracts is essential for minimizing construction projects losses, but current methods are time-consuming and error-prone. Studies using methods based on Natural Language Processing (NLP) exist, but their scope is often limited to text classification or segmented label prediction. This paper investigates whether integrating Large Language Models (LLMs) and Knowledge Graphs (KGs) can enhance the accuracy and interpretability of automated contract risk identification. A tuning-free approach is proposed that integrates LLMs with a Nested Contract Knowledge Graph (NCKG) using a Graph Retrieval-Augmented Generation (GraphRAG) framework for contract knowledge retrieval and reasoning. Tested on international EPC contracts, the method achieves more accurate risk evaluation and interpretable risk summaries than baseline models. These findings demonstrate the potential of combining LLMs and KGs for reliable reasoning in tasks that are knowledge-intensive and specialized, such as contract review.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"175 ","pages":"Article 106179"},"PeriodicalIF":9.6,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143820681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporal defect point localization in pipe CCTV Videos with Transformers","authors":"Zhu Huang , Gang Pan , Chao Kang , YaoZhi Lv","doi":"10.1016/j.autcon.2025.106160","DOIUrl":"10.1016/j.autcon.2025.106160","url":null,"abstract":"<div><div>During the inspection and maintenance of underground pipe systems, technicians often spend considerable time searching for subtle defects in inspection videos captured under varying pipe conditions using Closed-Circuit Television (CCTV). The lack of feature extractors tailored for pipe images, combined with the complexity of pipe CCTV videos, poses substantial challenges to the performance and applicability of conventional frame-by-frame, image-based localization algorithms. To address these challenges, this paper introduces PipeTR, a transformer-driven, end-to-end network, offering enhanced insights into pipe CCTV video analysis by shifting from a frame-based to a video-based approach. The development of PipeTR aims to assist technicians by automating the most time-consuming step of the assessment, thereby improving both efficiency and accuracy. Experiments demonstrate that PipeTR outperforms other image-based, frame-by-frame analysis methods on real-world CCTV pipe inspection video datasets, achieving an average F1 score of 43.04%, which is a 5.35% improvement over the current state-of-the-art methods.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"175 ","pages":"Article 106160"},"PeriodicalIF":9.6,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143820760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing bridge inspection data quality using machine learning","authors":"Chenhong Zhang , Xiaoming Lei , Ye Xia","doi":"10.1016/j.autcon.2025.106182","DOIUrl":"10.1016/j.autcon.2025.106182","url":null,"abstract":"<div><div>Bridge condition assessment is often compromised by errors in inspection data, limiting reliable maintenance and management decisions. This paper investigates how to enhance inspection data quality by automatically identifying and correcting the inaccurate assessment of structural conditions. A model that integrates textual and quantitative features is proposed to identify defect and condition ratings through defect descriptions, with corresponding dynamic partitioning strategy to detect ambiguous data, and a down-sampling and bagging ensemble to address class imbalance. Validated with ten years of real inspection data from 464 bridges, results show 98 % accuracy in correcting condition scores and 100 % accuracy in condition-level identification. These findings underscore the method's potential to improve the reliability of condition assessment and strengthen bridge management decision-making. Future research can focus on refining condition level identification algorithms for severely deteriorated structures.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"175 ","pages":"Article 106182"},"PeriodicalIF":9.6,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143820758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance prediction and sensitivity analysis of tunnel boring machine in various geological conditions using an ensemble extreme learning machine","authors":"Lianhui Jia , Lijie Jiang , Yongliang Wen , Jiulin Wu , Heng Wang","doi":"10.1016/j.autcon.2025.106169","DOIUrl":"10.1016/j.autcon.2025.106169","url":null,"abstract":"<div><div>The selection of data modelling methods in the data-driven performance prediction of tunnel boring machines is a challenge since each method has its own advantages and disadvantages compared with each other. Extreme learning machine (ELM) exhibits the benefits of fast learning speed, better scalability, and generalization performance, and is easy to convert between neural networks-based and kernel function-based methods. Thus, this paper proposes an ensemble extreme learning machine model for the performance prediction of tunnel boring machines, aiming to take respective advantage of different ELM models. The proposed model is validated through six in-situ datasets of a tunnel boring machine with different geological conditions, showing that it can produce accurate dynamic and statistical performance prediction results (average error of 3.12 %). 
The sensitivity analysis results show that the sensitivities are mainly distributed on the parameters of driving system and chamber system when the excavation face is occupied by a single geological layer.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"175 ","pages":"Article 106169"},"PeriodicalIF":9.6,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143808025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
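The ELM named in this abstract has a simple core: hidden-layer weights are random and fixed, and only the output weights are fit in closed form. The sketch below implements that core and averages several independently seeded ELMs into an ensemble, solving the regularised normal equations by Gaussian elimination. Sizes and data are toy values, not TBM records, and this is not the paper's specific ensemble scheme.

```python
# Hedged sketch of an extreme learning machine (ELM) ensemble: random
# hidden weights are frozen; output weights solve the ridge-regularised
# normal equations beta = (H^T H + lam*I)^-1 H^T y. Toy 1-D data only.
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(xs, ys, k, seed, lam=1e-6):
    rng = random.Random(seed)
    w = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(k)]  # frozen
    H = [[math.tanh(a * x + b) for a, b in w] for x in xs]            # hidden layer
    HtH = [[sum(H[r][i] * H[r][j] for r in range(len(xs))) + (lam if i == j else 0)
            for j in range(k)] for i in range(k)]
    Hty = [sum(H[r][i] * ys[r] for r in range(len(xs))) for i in range(k)]
    beta = solve(HtH, Hty)
    return lambda x: sum(b * math.tanh(a * x + c) for b, (a, c) in zip(beta, w))

xs = [i / 10 for i in range(20)]
ys = [2 * x + 0.5 for x in xs]                      # toy target to recover
models = [elm_fit(xs, ys, k=8, seed=s) for s in range(5)]
ensemble = lambda x: sum(m(x) for m in models) / len(models)
print(round(ensemble(1.0), 2))
```

Averaging differently seeded ELMs damps the variance that comes from the random hidden layer, which is one plausible reading of why an ensemble helps here.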
{"title":"Analysis of masonry work activity recognition accuracy using a spatiotemporal graph convolutional network across different camera angles","authors":"Sangyoon Yun , Sungkook Hong , Sungjoo Hwang , Dongmin Lee , Hyunsoo Kim","doi":"10.1016/j.autcon.2025.106178","DOIUrl":"10.1016/j.autcon.2025.106178","url":null,"abstract":"<div><div>Human activity recognition (HAR) in construction has gained attention for its potential to improve safety and productivity. While HAR research has shifted toward vision-based approaches, many studies typically use data from a specific angle, limiting understanding of how camera angles affect accuracy. This paper addresses this gap by using AlphaPose and Spatial-Temporal Graph Convolutional Network (ST-GCN) algorithms to analyze the impact of various camera angles on HAR accuracy in masonry work. Data was collected from seven angles (0° to 180°), with the frontal view only used for training. Results showed consistently high recognition accuracy (>80 %) for side views, while accuracy decreased as the camera shifted toward rear views, especially from directly behind due to occlusion. By quantifying HAR accuracy across angles, this study provides baseline data for predicting performance from various camera positions, improving camera placement strategies and enhancing monitoring system effectiveness on construction sites.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"175 ","pages":"Article 106178"},"PeriodicalIF":9.6,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143808028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bridge management with AI, UAVs, and BIM","authors":"Pablo Araya-Santelices , Zacarías Grande , Edison Atencio , José Antonio Lozano-Galant","doi":"10.1016/j.autcon.2025.106170","DOIUrl":"10.1016/j.autcon.2025.106170","url":null,"abstract":"<div><div>Artificial intelligence (AI) has significantly advanced infrastructure monitoring, particularly through machine learning and deep learning techniques. In bridge management, combining AI with Building Information Modeling (BIM) and unmanned aerial vehicles (UAVs) enhances accuracy, efficiency, and safety. This paper reviews AI, UAV, and BIM applications, focusing on technology integration and algorithm performance. A systematic literature review using the PRISMA framework analyzed 4436 papers from Scopus and Web of Science. Findings indicate that AI is mainly applied to damage detection, primarily through Convolutional Neural Networks (CNNs), while UAVs provide high-resolution imaging, and BIM serves as a platform for data storage and visualization. Key challenges include the lack of standardized datasets, limited automation in decision-making, and weak interoperability among these technologies. Future research should focus on dataset availability, hybrid AI models, and integrated automation strategies. This review highlights key areas to enhance AI-based bridge management.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"175 ","pages":"Article 106170"},"PeriodicalIF":9.6,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143808029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video-based evaluation of bolt loosening in steel bridges using multi-frame spatiotemporal feature correlation","authors":"Baoxian Wang , Tao Wu , Weigang Zhao , Yilin Wu","doi":"10.1016/j.autcon.2025.106173","DOIUrl":"10.1016/j.autcon.2025.106173","url":null,"abstract":"<div><div>Bolt loosening in steel bridges poses critical safety risks. This paper proposes a multi-view spatiotemporal framework to assess bolt loosening. First, YOLO detects gusset plates and bolts, with spatiotemporal correlation model extracting region-specific bolt video clips. An enhanced UNet architecture quantifies bolt shadow areas as loosening indicators. To address single-frame feature limitations, phase correlation aligns multi-frame shadow regions, deriving enriched multi-perspective features. Finally, features are normalized within each gusset plate region, enabling probabilistic neural network to determine loosening severity. The framework was validated via a steel bridge semi-physical model under diverse conditions (imaging distances, backgrounds, illumination). Results confirm its reliability in delivering robust evaluations despite environmental variability.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"175 ","pages":"Article 106173"},"PeriodicalIF":9.6,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143808026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven safety management of worker-equipment interactions using visual relationship detection and semantic analysis","authors":"Liu Yipeng , Wang Junwu , Mehran Eskandari Torbaghan","doi":"10.1016/j.autcon.2025.106181","DOIUrl":"10.1016/j.autcon.2025.106181","url":null,"abstract":"<div><div>Existing technologies struggle to accurately identify interactions between workers and equipment, as well as the deep semantics of complex construction scenes. To address these limitations, this paper proposes an automated construction site safety management system designed to enhance scene understanding and identify safety hazards while focusing on hazard-area and personal protective equipment (PPE) interaction. The system transforms image information into worker-centric triplets and generates precise textual descriptions through semantic enhancement, enabling effective scene analysis. By comparing the generated descriptions with predefined hazard statements, the system identifies potential risks. Experimental results demonstrate a 9.6 % improvement in recall for Ng-mR@K metrics (K = 20, 50, 100). Additionally, the system successfully filters over 90 % of invalid relationships, achieving 83.7 % accuracy in semantic similarity matching, significantly enhancing detection precision and semantic understanding. 
By advancing from object detection to a structured image-to-triplet-to-text framework, this paper offers an efficient and reliable solution for automated construction site safety management.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"175 ","pages":"Article 106181"},"PeriodicalIF":9.6,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143808024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
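The image-to-triplet-to-text flow described above can be sketched end to end: triplets become sentences, and sentences are compared against predefined hazard statements. Real semantic matching would use sentence embeddings; the word-overlap (Jaccard) score, the hazard list, and the threshold below are invented stand-ins for illustration.

```python
# Sketch of the triplet-to-text-to-hazard-matching flow. Worker-centric
# triplets become sentences; each sentence is scored against predefined
# hazard statements. Jaccard word overlap is a toy stand-in for the
# semantic similarity matching the paper performs.
def triplet_to_text(subj, rel, obj):
    return f"{subj} {rel} {obj}"

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

HAZARDS = ["worker inside excavator swing area", "worker without helmet"]

def detect(triplets, thresh=0.6):
    hits = []
    for t in triplets:
        text = triplet_to_text(*t)
        for h in HAZARDS:
            if jaccard(text, h) >= thresh:
                hits.append((text, h))
    return hits

scene = [("worker", "inside", "excavator swing area"),
         ("worker", "wearing", "helmet")]
print(detect(scene))
```

Note that "worker wearing helmet" scores 0.5 against "worker without helmet" under plain word overlap — a near false positive that illustrates why the paper's semantic enhancement step matters before matching.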