{"title":"Weakly-aligned cross-modal learning framework for subsurface defect segmentation on building façades using UAVs","authors":"Sudao He , Gang Zhao , Jun Chen , Shenghan Zhang , Dhanada Mishra , Matthew Ming-Fai Yuen","doi":"10.1016/j.autcon.2024.105946","DOIUrl":"10.1016/j.autcon.2024.105946","url":null,"abstract":"<div><div>Infrared (IR) thermography combined with Unmanned Aerial Vehicles (UAVs) offers an innovative approach for automated building façade inspections. However, extracting quantitative defect information from a single image poses a significant challenge. To address this, this paper introduces a Weakly-aligned Cross-modal Learning framework for subsurface defect segmentation using UAVs. This framework consists of two main components: the Multimodal Feature Description Network (MFDN) and the Prompt-aided Cross-modal Graph Learning (PCGL) algorithm. Initially, RGB–IR image pairs are processed by MFDN to extract feature descriptors for multi-modal alignment. The PCGL algorithm identifies visually critical areas through graph partitioning on a Wasserstein graph. These critical areas are transferred to the aligned IR image, and a Wasserstein Adjacency Graph (WAG) is constructed based on masked superpixel segmentation. Finally, the defect contours are pinpointed by detecting abnormal vertices of the WAG. The effectiveness is validated through controlled laboratory experiments and field applications on tiled façades.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"170 ","pages":"Article 105946"},"PeriodicalIF":9.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142939733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
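The abstract's final step detects defect contours as abnormal vertices of an adjacency graph over superpixels. The paper's actual PCGL/WAG construction is not given here; below is a minimal illustrative sketch, assuming each superpixel vertex carries a mean thermal intensity and that a vertex is "abnormal" when it deviates strongly from its graph neighbours. The function name, the neighbour-contrast statistic, and the z-score threshold are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def abnormal_vertices(node_means, edges, z_thresh=2.0):
    """Flag vertices whose value deviates from their neighbours' mean by
    more than z_thresh standard deviations of the neighbour residuals.

    node_means : per-superpixel mean thermal intensity, length N
    edges      : list of (i, j) undirected adjacency pairs
    """
    vals = np.asarray(node_means, dtype=float)
    nbrs = {i: [] for i in range(len(vals))}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    # Residual of each vertex against the mean of its graph neighbours.
    resid = np.array([vals[i] - np.mean(vals[nbrs[i]]) if nbrs[i] else 0.0
                      for i in range(len(vals))])
    sigma = np.std(resid) or 1e-9  # guard against an all-uniform graph
    return set(np.flatnonzero(np.abs(resid) / sigma > z_thresh).tolist())
```

On a path graph with one hot vertex, only that vertex is flagged; a subsurface delamination would appear the same way as a thermally contrasting superpixel cluster.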
{"title":"Self-training method for structural crack detection using image blending-based domain mixing and mutual learning","authors":"Quang Du Nguyen , Huu-Tai Thai , Son Dong Nguyen","doi":"10.1016/j.autcon.2024.105892","DOIUrl":"10.1016/j.autcon.2024.105892","url":null,"abstract":"<div><div>Deep learning-based structural crack detection utilizing fully supervised methods requires laborious labeling of training data. Moreover, models trained on one dataset often experience significant performance drops when applied to others due to domain shifts prompted by diverse structures, materials, and environmental conditions. This paper addresses these issues by introducing a robust self-training domain adaptive segmentation (STDASeg) pipeline. STDASeg incorporates an image blending-based domain mixing module to minimize domain discrepancies. Additionally, STDASeg involves a two-stage self-training framework characterized by a mutual learning scheme between Convolutional Neural Networks and Transformers, effectively learning domain-invariant features from the two domains. Comprehensive evaluations across three challenging cross-dataset crack detection scenarios highlight the superiority of STDASeg over traditional supervised training approaches and current state-of-the-art methods. These results confirm the stability of STDASeg, thus supporting more efficient infrastructure assessments.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"170 ","pages":"Article 105892"},"PeriodicalIF":9.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142788866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
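STDASeg's image blending-based domain mixing is not specified in detail in this abstract; the sketch below shows one common recipe such modules build on (assumed here, not taken from the paper): paste the labelled crack pixels of a source-domain image onto a target-domain image so the mixed sample contains content from both domains. The function name and array conventions are hypothetical.

```python
import numpy as np

def domain_mix(src_img, src_lbl, tgt_img, tgt_lbl):
    """Mask-based domain mixing: copy source-domain crack pixels (label > 0)
    onto a target-domain image, blending the two labels accordingly.

    src_img, tgt_img : (H, W, 3) images; src_lbl, tgt_lbl : (H, W) masks.
    """
    mask = src_lbl > 0                               # crack pixels from the source
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_lbl = np.where(mask, src_lbl, tgt_lbl)
    return mixed_img, mixed_lbl
```

In a self-training loop, `tgt_lbl` would typically be a pseudo-label predicted by the current model rather than a ground-truth mask.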
{"title":"Automated six-degree-of-freedom Stewart platform for heavy floor tiling","authors":"Siwei Chang , Zemin Lyu , Jinhua Chen , Tong Hu , Rui Feng , Haobo Liang","doi":"10.1016/j.autcon.2024.105932","DOIUrl":"10.1016/j.autcon.2024.105932","url":null,"abstract":"<div><div>While existing floor tiling robots provide automated tiling for small tiles, robots designed for large and heavy tiles are rare. This paper develops a six-degree-of-freedom Stewart platform-based floor tiling robot for automated tiling of heavy tiles. The key contributions of this paper are: 1) establishing mechanical and kinematic models for a parallel robot to enhance the payload capacity of existing floor tiling robots. 2) designing a dual-camera system for precise visual alignment by capturing tile corner points from a complete perspective. Experimental validation demonstrated the robot's ability to automatically tile heavy floor tiles, with highly synchronized motions. The dual camera system achieved angle and distance deviations within ±0.001° and 0.5 mm. Quantitative analysis using the Borg RPE scale and EMG signals validated a reduction in physical strain. This research provides a feasible solution for automating heavy floor tile installation, effectively mitigating physical fatigue while enhancing the tiling alignment precision.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"170 ","pages":"Article 105932"},"PeriodicalIF":9.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142816513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
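The robot above is built on a six-degree-of-freedom Stewart platform. The paper's mechanical model and actual anchor geometry are not reproduced here, but the standard inverse kinematics of any Stewart platform is compact enough to sketch: each leg length is the distance from a base anchor to the corresponding platform anchor after applying the desired pose, L_i = |R p_i + t − b_i|. The anchor layout and angles below are illustrative assumptions.

```python
import numpy as np

def leg_lengths(base_pts, plat_pts, translation, rpy):
    """Stewart platform inverse kinematics: given base anchors b_i, platform
    anchors p_i (platform frame), and a desired pose (translation t plus
    roll/pitch/yaw), return the six actuator leg lengths |R p_i + t - b_i|."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y), np.cos(y), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                      # yaw-pitch-roll composition
    world_plat = (R @ np.asarray(plat_pts).T).T + np.asarray(translation)
    return np.linalg.norm(world_plat - np.asarray(base_pts), axis=1)

# Hypothetical symmetric layout: six anchors on a unit circle, same on both plates.
anchors = np.array([[np.cos(t), np.sin(t), 0.0]
                    for t in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
```

With identical anchor rings and a pure vertical translation, all six legs extend equally, which is the sanity check a controller for such a platform would start from.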
{"title":"Structural design and fabrication of concrete reinforcement with layout optimisation and robotic filament winding","authors":"Robin Oval , John Orr , Paul Shepherd","doi":"10.1016/j.autcon.2024.105952","DOIUrl":"10.1016/j.autcon.2024.105952","url":null,"abstract":"<div><div>Reinforced concrete is a major contributor to the environmental impact of the construction industry, due not only to its cement content, but also its steel tensile reinforcement, estimated to represent around 40% of the material embodied carbon. Reinforcement contributes significantly because construction rationalisation results in regular cages of steel bars, despite the availability of structural-optimisation algorithms and additive-manufacturing technologies. This paper fuses computational design and digital fabrication to optimise the reinforcement layout of concrete structures, designing with constrained layout optimisation of strut-and-tie models where the ties are produced with robotic filament winding. The methodology is presented, implemented in open-source code, and illustrated on beam and plate reinforcement applications. The numerical studies yield a discussion about parameter selection and constraint influence on material and construction efficiency trade-offs. Small-scale physical prototypes up to 50 cm <span><math><mo>×</mo></math></span> 50 cm provide a proof-of-concept.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"170 ","pages":"Article 105952"},"PeriodicalIF":9.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142939669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital twin construction with a focus on human twin interfaces","authors":"Ranjith K. Soman , Karim Farghaly , Grant Mills , Jennifer Whyte","doi":"10.1016/j.autcon.2024.105924","DOIUrl":"10.1016/j.autcon.2024.105924","url":null,"abstract":"<div><div>Despite the growing emphasis on digital twins in construction, there is limited understanding of how to enable effective human interaction with these systems, limiting their potential to augment decision-making. This paper investigates the research question: “How can construction control rooms be utilized as digital twin interfaces to enhance the accuracy and efficiency of decision-making in the digital twin construction workflow?”. Design science research was used to develop a framework for human-digital twin interfaces, and it was evaluated in a real-world construction project. Findings reveal that control rooms can serve as dynamic interfaces within the digital twin ecosystem, improving coordination efficiency and decision-making accuracy. This finding is significant for practitioners and researchers, as it highlights the role of digital twin interfaces in augmenting decision-making. The paper opens avenues for future studies of human-digital twin interaction and machine learning in construction, such as imitation learning, codifying tacit knowledge, and new HCI paradigms.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"170 ","pages":"Article 105924"},"PeriodicalIF":9.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142939670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated reality capture for indoor inspection using BIM and a multi-sensor quadruped robot","authors":"Zhengyi Chen , Changhao Song , Boyu Wang , Xingyu Tao , Xiao Zhang , Fangzhou Lin , Jack C.P. Cheng","doi":"10.1016/j.autcon.2024.105930","DOIUrl":"10.1016/j.autcon.2024.105930","url":null,"abstract":"<div><div>This paper presents a real-time, cost-effective navigation and localization framework tailored for quadruped robot-based indoor inspections. A 4D Building Information Model is utilized to generate a navigation map, supporting robotic pose initialization and path planning. The framework integrates a cost-effective, multi-sensor SLAM system that combines inertial-corrected 2D laser scans with fused laser and visual-inertial SLAM. Additionally, a deep-learning-based object recognition model is trained for multi-dimensional reality capture, enhancing comprehensive indoor element inspection. Validated on a quadruped robot equipped with an RGB-D camera, IMU, and 2D LiDAR in an academic setting, the framework achieved collision-free navigation, reduced localization drift by 71.77 % compared to traditional SLAM methods, and provided accurate large-scale point cloud reconstruction with 0.119-m precision. Furthermore, the object detection model attained mean average precision scores of 73.7 % for 2D detection and 62.9 % for 3D detection.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"170 ","pages":"Article 105930"},"PeriodicalIF":9.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic navigation for automated robotic inspection and indoor environment quality monitoring","authors":"Difeng Hu, Vincent J.L. Gan","doi":"10.1016/j.autcon.2024.105949","DOIUrl":"10.1016/j.autcon.2024.105949","url":null,"abstract":"<div><div>Maintaining a comfortable indoor environment is essential for enhancing occupant well-being. However, traditional inspection methods rely on manual input of precise coordinates for target objects, limiting efficiency. This paper proposes a semantic navigation approach to improve robotic inspection intelligence and efficiency. A revised RandLA-Net and KNN algorithm construct a semantic map rich in detailed object information, supporting semantic navigation. Subsequently, an object instance reasoning algorithm automatically identifies and extracts target object coordinates from the semantic map using human-like language commands. Given the position information, a semantics-aware A* algorithm calculates safer, more efficient navigation paths through enhanced robot-environment interaction. Experiments demonstrate a position accuracy of ∼0.08 m for objects in the semantic map and effective coordinate extraction by the reasoning algorithm. The semantics-aware A* algorithm generates paths farther from obstacles and cluttered areas with less computational time, indicating its superior performance in terms of the robot's safety and inspection efficiency.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"170 ","pages":"Article 105949"},"PeriodicalIF":9.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142888278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
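The semantics-aware A* idea above (paths that trade extra length for clearance from obstacles) can be sketched with a standard A* on a grid whose step cost is inflated near obstacle cells. The paper's actual cost function and map representation are not given in the abstract; the 4-connected grid, the one-cell clearance ring, and the penalty weight below are assumptions for illustration.

```python
import heapq

def semantic_astar(grid, start, goal, clearance_w=2.0):
    """A* on a 4-connected occupancy grid (1 = obstacle). Cells adjacent to
    an obstacle cost 1 + clearance_w instead of 1, steering paths away from
    cluttered areas while the Manhattan heuristic keeps search admissible."""
    rows, cols = len(grid), len(grid[0])

    def near_obstacle(r, c):
        return any(0 <= r + dr < rows and 0 <= c + dc < cols and grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1))

    def h(p):  # Manhattan distance, never overestimates (steps cost >= 1)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0.0, start, [start])]
    best = {}
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if best.get(cur, float("inf")) <= g:
            continue            # already expanded with a cheaper cost
        best[cur] = g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cur[0] + dr, cur[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                step = 1.0 + (clearance_w if near_obstacle(nr, nc) else 0.0)
                heapq.heappush(open_set,
                               (g + step + h((nr, nc)), g + step, (nr, nc),
                                path + [(nr, nc)]))
    return None  # goal unreachable
```

A semantic map would refine this further, e.g. assigning different penalty weights per object class (fragile equipment vs. furniture), which is a straightforward change to the `step` computation.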
{"title":"Hybrid deep learning model for accurate cost and schedule estimation in construction projects using sequential and non-sequential data","authors":"Min-Yuan Cheng, Quoc-Tuan Vu, Frederik Elly Gosal","doi":"10.1016/j.autcon.2024.105904","DOIUrl":"10.1016/j.autcon.2024.105904","url":null,"abstract":"<div><div>Accurate estimation of construction costs and schedules is crucial for optimizing project planning and resource allocation. Most current approaches utilize traditional statistical analysis and machine learning techniques to process the vast amounts of data regularly generated in construction environments. However, these approaches do not adequately capture the intricate patterns in either time-dependent or time-independent data. Thus, a hybrid deep learning model (NN-BiGRU), combining a Neural Network (NN) for time-independent data and a Bidirectional Gated Recurrent Unit (BiGRU) for time-dependent data, was developed in this paper to estimate the final cost and schedule to completion of projects. The Optical Microscope Algorithm (OMA) was used to fine-tune the NN-BiGRU model (OMA-NN-BiGRU). The proposed model earned Reference Index (RI) values of 0.977 for construction costs and 0.932 for completion schedules. These findings underscore the potential of the OMA-NN-BiGRU model to provide highly accurate predictions, enabling stakeholders to make informed decisions that promote project efficiency and overall success.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"170 ","pages":"Article 105904"},"PeriodicalIF":9.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142816520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
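The fusion idea behind NN-BiGRU can be illustrated with a minimal forward pass: a dense branch encodes time-independent project features, a shared GRU cell is run forward and backward over the time-dependent sequence, and the three representations are concatenated into one regression head. All layer sizes and weights below are random placeholders, and the OMA fine-tuning step is omitted entirely; this is a structural sketch, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_step(x, h, W, U, b):
    """One GRU cell update: update gate z, reset gate r, candidate state n."""
    z = 1 / (1 + np.exp(-(W[0] @ x + U[0] @ h + b[0])))
    r = 1 / (1 + np.exp(-(W[1] @ x + U[1] @ h + b[1])))
    n = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])
    return (1 - z) * h + z * n

def hybrid_forward(static_x, seq_x, params):
    """Dense branch for time-independent features + bidirectional GRU over
    the time-dependent records, fused by a linear regression head."""
    Wd, bd, W, U, b, Wo, bo = params
    dense = np.tanh(Wd @ static_x + bd)          # time-independent branch
    h_f = np.zeros(U[0].shape[0])
    for x in seq_x:                              # forward pass over time
        h_f = gru_step(x, h_f, W, U, b)
    h_b = np.zeros(U[0].shape[0])
    for x in seq_x[::-1]:                        # backward pass over time
        h_b = gru_step(x, h_b, W, U, b)
    fused = np.concatenate([dense, h_f, h_b])
    return float(Wo @ fused + bo)                # scalar cost/schedule estimate

# Hypothetical sizes: 4 static features, 6 monthly records of 3 features each.
d_static, d_seq, d_h = 4, 3, 5
params = (
    rng.normal(size=(6, d_static)), rng.normal(size=6),    # dense branch
    rng.normal(size=(3, d_h, d_seq)),                      # GRU input weights
    rng.normal(size=(3, d_h, d_h)),                        # GRU recurrent weights
    rng.normal(size=(3, d_h)),                             # GRU biases
    rng.normal(size=6 + 2 * d_h), rng.normal(),            # output head
)
estimate = hybrid_forward(rng.normal(size=d_static), rng.normal(size=(6, d_seq)), params)
```

A real implementation would use separate forward/backward GRU weights and train the whole network end to end; the single shared cell here just keeps the sketch short.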
{"title":"Sensor adoption in the construction industry: Barriers, opportunities, and strategies","authors":"Zhong Wang, Vicente A. González, Qipei Mei, Gaang Lee","doi":"10.1016/j.autcon.2024.105937","DOIUrl":"10.1016/j.autcon.2024.105937","url":null,"abstract":"<div><div>This paper examines the underutilization of sensors in the construction industry despite their significant potential for improving performance. A systematic review was conducted on research published between 2004 and 2024, identifying 11 key barriers such as the need for advanced skill sets and user-centric design, lack of standardized practices, and challenges in data networks and management. The study applied both quantitative descriptive analysis and qualitative content analysis to explore these barriers across five stages of sensor adoption. A total of 63 articles were thoroughly reviewed to identify thematic patterns and chronological trends. The findings highlight critical areas that require attention, including the development of standardized protocols, enhancing data-driven decision-making with advanced analytics, and fostering industry-wide training programs. Additionally, leveraging Lean Construction 4.0 principles is proposed to address these challenges. The insights from this research aim to support the construction industry in integrating sensor technologies more effectively, leading to greater efficiency and improved performance.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"170 ","pages":"Article 105937"},"PeriodicalIF":9.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time and high-accuracy defect monitoring for 3D concrete printing using transformer networks","authors":"Hongyu Zhao , Junbo Sun , Xiangyu Wang , Yufei Wang , Yang Su , Jun Wang , Li Wang","doi":"10.1016/j.autcon.2024.105925","DOIUrl":"10.1016/j.autcon.2024.105925","url":null,"abstract":"<div><div>Defects and anomalies during the 3D concrete printing (3DCP) process significantly affect final construction quality. This paper proposes a real-time, high-accuracy method for monitoring defects in the printing process using a transformer-based detector. Despite limited data availability, deep learning-based data augmentation and image processing techniques were employed to enable effective training of this complex transformer model. A range of enhancement strategies was applied to the RT-DETR, resulting in remarkable improvements, including a mAP50 of 98.1 %, mAP50–95 of 68.0 %, and a computation speed of 72 FPS. The enhanced RT-DETR outperformed state-of-the-art detectors such as YOLOv8 and YOLOv7 in detecting defects in 3DCP. Furthermore, the improved RT-DETR was used to analyze the relationships between defect count, size, and printer parameters, providing guidance for operators to fine-tune printer settings and promptly address defects. This monitoring method reduces material waste and minimizes the risk of structural collapse during the printing process.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"170 ","pages":"Article 105925"},"PeriodicalIF":9.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
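The mAP50 and mAP50–95 figures quoted above are built on intersection-over-union matching between predicted and ground-truth boxes. The small sketch below shows that underlying IoU computation and the 0.5-threshold match rule; the box coordinates in the test are hypothetical, and a full mAP evaluation would additionally rank detections by confidence and average precision over recall levels.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def true_positive_at_50(pred, gt):
    """A detection counts toward mAP50 when its IoU with a ground-truth
    box is at least 0.5; mAP50-95 averages this over thresholds 0.5-0.95."""
    return iou(pred, gt) >= 0.5
```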