"End-to-end frequency enhancement framework for GPR images using domain-adaptive generative adversarial networks"
Hancheng Zhang, Yuanyuan Hu, Qiang Wang, Zhendong Qian, Pengfei Liu
Computer-Aided Civil and Infrastructure Engineering, published 2025-05-29. DOI: 10.1111/mice.13525

Ground-penetrating radar (GPR) offers nondestructive subsurface imaging but suffers from a trade-off between frequency and penetration depth: high frequencies yield better resolution with limited depth, while low frequencies penetrate deeper with reduced detail. This paper introduces a frequency enhancement method for GPR images using domain-adaptive generative adversarial networks. The proposed end-to-end framework integrates a Domain Adaptation Module (DAM) and a Frequency Enhancement Module (FEM) to address the frequency-resolution trade-off and domain discrepancies. Because simulated and real-world GPR data differ inherently in signal characteristics, models trained directly on simulated data often degrade and lose physical consistency in real-world scenarios, making domain adaptation essential for bridging this gap. The DAM aligns simulated and real low-frequency GPR data, enabling effective frequency enhancement by the FEM. By reducing domain discrepancies and ensuring feature consistency, the framework generates high-frequency GPR images with enhanced clarity and detail. Extensive experiments show that the method significantly improves image quality, target detection, and localization accuracy, outperforming state-of-the-art approaches and demonstrating strong potential for subsurface imaging applications.
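The role of the DAM — shrinking the statistical gap between simulated and real low-frequency data — can be illustrated with a maximum mean discrepancy (MMD) statistic, a common domain-discrepancy measure. This is purely a sketch: the paper's actual adversarial alignment objective is not specified in the abstract, and the samples below are invented stand-ins for GPR features.

```python
import math
import random

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel between two scalar samples."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy between two sample sets.

    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]; small values suggest
    the two domains are well aligned in the kernel feature space.
    """
    kxx = sum(gaussian_kernel(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(gaussian_kernel(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(gaussian_kernel(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

random.seed(0)
simulated = [random.gauss(0.0, 1.0) for _ in range(200)]  # stand-in: simulated-domain features
real_like = [random.gauss(0.0, 1.0) for _ in range(200)]  # same distribution: aligned domains
shifted   = [random.gauss(2.0, 1.0) for _ in range(200)]  # shifted distribution: domain gap

mmd_aligned = mmd2(simulated, real_like)
mmd_gap = mmd2(simulated, shifted)
```

A domain adaptation module, roughly speaking, trains the feature extractor to drive a discrepancy like `mmd_gap` toward `mmd_aligned`.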
"Early detection and location of unexpected events in buried pipelines under unseen conditions using the two-stream global fusion classifier model"
Sun-Ho Lee, Choon-Su Park, Dong-Jin Yoon
Computer-Aided Civil and Infrastructure Engineering, published 2025-05-29. DOI: 10.1111/mice.13507

Failure of buried pipelines can result in serious impacts, such as explosions, environmental contamination, and economic losses, so early detection and location of unexpected events is crucial. However, conventional monitoring methods generalize poorly under varying environmental and operational conditions, and the cross-correlation-based time difference of arrival approach widely used for source localization cannot identify anomalous events. This study introduces the two-stream global fusion classifier (TSGFC), a novel multitask deep learning model designed for the early detection and location of unexpected events in buried pipelines, even under previously unseen conditions. TSGFC combines spatial and temporal features from accelerometer data using a global fusion mechanism and uniquely performs both event classification and source localization through a unified multitask framework. To ensure generalization across diverse environments, a deliberate data acquisition strategy evaluates the model under domain shift: training data come from controlled experiments, while test data come from real-world excavation activities conducted on a completely different pipeline. The results confirm that TSGFC identifies unexpected excavation activity with 95.45% accuracy and minimal false alarms, even on data collected from a pipeline unseen during training.
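The conventional cross-correlation TDOA baseline the abstract refers to can be sketched in a few lines: the lag that maximizes the cross-correlation between two sensor recordings, multiplied by the wave speed in the pipe, places the source between the sensors. The toy transient below is an invented illustration, not the study's data.

```python
def cross_correlation_delay(a, b):
    """Estimate the lag (in samples) of signal b relative to signal a
    by maximizing the cross-correlation over all candidate lags."""
    n = len(a)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-n + 1, n):
        score = sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy leak-like transient observed at two sensors; sensor B hears it 5 samples later.
template = [1.0, 3.0, 2.0, -1.0, -2.0]
sensor_a = [0.0] * 40
sensor_b = [0.0] * 40
for i, v in enumerate(template):
    sensor_a[10 + i] = v
    sensor_b[15 + i] = v   # same waveform, delayed by 5 samples

delay = cross_correlation_delay(sensor_a, sensor_b)  # -> 5
```

With sampling rate fs and wave speed c, the source offset from the midpoint between the sensors is (delay / fs) * c / 2.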
"A surface electromyography-based deep learning model for guiding semi-autonomous drones in road infrastructure inspection"
Yu Li, David Zhang, Penghao Dong, Shanshan Yao, Ruwen Qin
Computer-Aided Civil and Infrastructure Engineering, published 2025-05-28. DOI: 10.1111/mice.13520

While semi-autonomous drones are increasingly used for road infrastructure inspection, their limited ability to independently handle complex scenarios beyond the initial job plan hinders their full potential. To address this, the paper proposes a human-drone collaborative inspection approach that uses flexible surface electromyography (sEMG) to convey inspectors' speech guidance to intelligent drones. Specifically, the paper contributes a new data set, sEMG Commands for Piloting Drones (sCPD), and an sEMG-based Cross-subject Classification Network (sXCNet) for both command keyword recognition and inspector identification. sXCNet achieves the desired functionality and performance by combining sEMG signal processing, spatial-temporal-frequency deep feature extraction, and multitask-enabled cross-subject representation learning. The cross-subject design permits deploying one unified model across all authorized inspectors, eliminating the need for subject-dependent models tailored to individual users. sXCNet achieves classification accuracies of 98.1% on the sCPD data set and 86.1% on the public Ninapro db1 data set, demonstrating strong potential for advancing sEMG-enabled human-drone collaboration in road infrastructure inspection.
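Cross-subject claims like this are conventionally tested with a leave-one-subject-out (LOSO) protocol: train on all subjects but one, test on the held-out subject. A minimal sketch of that evaluation loop, with a nearest-centroid classifier standing in for sXCNet and invented 2-D "sEMG features":

```python
def nearest_centroid_predict(train, test_x):
    """train: list of (features, label) pairs; classify test_x by the
    closest class centroid (squared Euclidean distance)."""
    by_label = {}
    for x, y in train:
        by_label.setdefault(y, []).append(x)
    centroids = {y: [sum(col) / len(xs) for col in zip(*xs)]
                 for y, xs in by_label.items()}
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], test_x))

# Toy data: two command classes, three subjects, small per-subject offsets.
data = {
    "s1": [([0.1, 0.0], "up"), ([1.1, 1.0], "down")],
    "s2": [([0.0, 0.2], "up"), ([1.0, 1.2], "down")],
    "s3": [([0.2, 0.1], "up"), ([1.2, 1.1], "down")],
}

# Leave-one-subject-out: train on every other subject, test on the held-out one.
correct = total = 0
for held_out in data:
    train = [s for subj, samples in data.items() if subj != held_out
             for s in samples]
    for x, y in data[held_out]:
        correct += nearest_centroid_predict(train, x) == y
        total += 1
loso_accuracy = correct / total
```

The point of the protocol is that the test subject's signals never influence training, mirroring deployment to a new, authorized inspector.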
"Deep learning for computer vision in pulse-like ground motion identification"
Lu Han, Zhengru Tao
Computer-Aided Civil and Infrastructure Engineering, published 2025-05-28. DOI: 10.1111/mice.13521

Near-fault pulse-like ground motions can cause severe damage to long-period engineering structures, so a rapid and accurate identification method is essential for seismic design. Deep learning offers a solution by framing pulse-like motion identification as an image classification task, but its application faces challenges in both the data and the models. This study focuses on selecting suitable input images and optimizing model architecture through a comprehensive strategy. Diverse data sets are produced by transforming the original time histories into Morlet wavelet time-frequency (TF) diagrams, anomaly-marked velocity time histories, Fourier amplitude spectra and their smoothed diagrams, and pixel fusion diagrams. Two types of deep learning models are constructed for these data sets. A convolutional neural network (CNN) is enhanced by integrating a self-attention mechanism (SAM) to concentrate on local image features, and a seismic parameter layer is added to this enhanced model to reduce reliance on input data features. Visual Transformers, including the Vision Transformer (ViT) and Swin Transformer (SwinT), are adopted as well. The results of the enhanced CNN demonstrate that the TF diagram outperforms the other image types in classification accuracy and convergence speed, while the dual-input images perform worse. For all input data sets, accuracy under the constraint of the single parameter moment magnitude (Mw) is higher than under the constraint of rupture distance (Rrup), and accuracy under the two-parameter constraint of Mw and Rrup is higher still, with the TF diagram achieving the highest accuracy and the dual-input data improving. SwinT performs similarly to CNN+SAM and better than ViT for single-input images, where the TF diagram again gives the highest accuracy; for dual-input images, ViT is better than SwinT, and both outperform CNN+SAM. In a resource-limited environment, the enhanced CNN with single-input TF diagrams is the best strategy, and the physical constraint of Mw and Rrup is more effective, especially for dual-input images.
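The Morlet TF input can be illustrated by computing a single continuous-wavelet-transform coefficient: its magnitude is large when the wavelet's scale and position match an embedded velocity pulse, which is exactly the structure a TF image makes visible to a classifier. Signal length, scale, and the ω0 = 6 convention below are illustrative choices, not the paper's settings.

```python
import cmath
import math

def morlet(t, scale, omega0=6.0):
    """Complex Morlet wavelet (without admissibility correction) at time t."""
    u = t / scale
    return cmath.exp(1j * omega0 * u) * math.exp(-0.5 * u * u) / math.sqrt(scale)

def cwt_coefficient(signal, dt, scale, shift):
    """One continuous-wavelet-transform coefficient at a given scale and shift."""
    return sum(x * morlet(i * dt - shift, scale)
               for i, x in enumerate(signal)) * dt

# A single velocity pulse (one 2-second sine cycle) in a 10-second record,
# loosely mimicking a pulse-like near-fault velocity time history.
dt = 0.01
signal = [0.0] * 1000
for i in range(200):
    signal[400 + i] = math.sin(2 * math.pi * i / 200)  # pulse centered near t = 5 s

# The coefficient magnitude is much larger at the pulse than away from it.
at_pulse = abs(cwt_coefficient(signal, dt, scale=1.9, shift=5.0))
off_pulse = abs(cwt_coefficient(signal, dt, scale=1.9, shift=1.0))
```

A full TF diagram is just this coefficient evaluated over a grid of scales and shifts and rendered as an image.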
"Learning error distribution kernel-enhanced neural network methodology for multi-intersection signal control optimization"
H. Wang, Y. Wang, W. Li, A. B. Subramaniyan, G. Zhang
Computer-Aided Civil and Infrastructure Engineering, published 2025-05-28. DOI: 10.1111/mice.13522

Traffic congestion induces substantial mobility and energy inefficiency, and many research challenges remain in artificial intelligence (AI)-based traffic signal control and management. For example, it is difficult to develop AI-driven dynamic traffic system models that accurately capture high-resolution traffic attributes and to formulate robust control algorithms for traffic signal optimization. Additionally, uncertainties in traffic system modeling and control can further complicate signal system controllability. To partially address these challenges, this study presents a novel hybrid neural network model enhanced with a probability density function kernel shaping technique to better formulate traffic system dynamics and improve comprehensive traffic network modeling and control. Numerical experiments demonstrate that the proposed control approach outperforms the baseline control strategies, reducing overall average delays by 11.64% on average. By leveraging this model, the study addresses major challenges related to traffic congestion and energy inefficiency, working toward more effective and adaptable AI-based traffic control systems.
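The abstract does not detail its "probability density function kernel shaping," but the underlying building block — a kernel density estimate (KDE) of a model-error distribution — can be sketched as follows. The sample errors and bandwidth are invented for illustration.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a density estimate f(x) built from one Gaussian kernel per sample."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    def f(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return f

# Hypothetical signal-timing model errors (seconds of delay mis-prediction).
errors = [-1.2, -0.7, -0.1, 0.0, 0.2, 0.4, 0.9, 1.1]
density = gaussian_kde(errors, bandwidth=0.5)

peak = density(0.1)   # near the bulk of the observed errors
tail = density(4.0)   # far out in the tail
# The estimate integrates to ~1, as a density should (Riemann sum check).
total = sum(density(-10 + i * 0.01) for i in range(2001)) * 0.01
```

A kernel-shaping scheme would use such an estimated error density to weight or correct the network's predictions, rather than assuming a fixed noise model.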
"Machine learning models for predicting the International Roughness Index of asphalt concrete overlays on Portland cement concrete pavements"
K. Kwon, Y. Yeom, Y. J. Shin, A. Bae, H. Choi
Computer-Aided Civil and Infrastructure Engineering, published 2025-05-28. DOI: 10.1111/mice.13524

Although estimating the International Roughness Index (IRI) is crucial, previous studies have struggled to predict the IRI of asphalt concrete (AC) overlays on Portland cement concrete (PCC) pavements. This study introduces machine learning models for that task, focusing on incorporating pre-overlay treatments to reflect the composite characteristics of such pavements. The treatments are categorized into concrete pavement restoration (CPR) and fracturing methods. The developed models outperformed conventional approaches by effectively capturing the impact of these pre-overlay treatments, as evidenced by the distinct differences in their contributions to IRI predictions between the CPR and fracturing methods. The types and occurrences of pavement distresses also varied with the pre-overlay treatment applied. When separate IRI prediction models were developed for each treatment group, they performed better than the original model that combined all treatments, demonstrating the value of individualized modeling based on the specific pre-overlay treatment type.
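Why separate models per treatment group can beat a pooled model is easy to see with least squares: a line fitted to each group alone can never have a larger in-sample squared error on that group than the pooled line. The IRI-versus-age numbers below are invented for illustration and are not the study's data.

```python
def fit_line(points):
    """Ordinary least squares for y = a + b * x over (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return my - b * mx, b

def sse(points, model):
    """Sum of squared errors of a fitted line on a point set."""
    a, b = model
    return sum((y - (a + b * x)) ** 2 for x, y in points)

# Hypothetical IRI (m/km) versus overlay age (years) for two treatment groups:
# CPR overlays roughen slowly; fractured-slab overlays start rougher, roughen faster.
cpr        = [(1, 0.9), (3, 1.0), (5, 1.2), (8, 1.4), (10, 1.6)]
fracturing = [(1, 1.2), (3, 1.6), (5, 2.1), (8, 2.8), (10, 3.3)]

pooled_model = fit_line(cpr + fracturing)
pooled_error = sse(cpr, pooled_model) + sse(fracturing, pooled_model)
split_error = sse(cpr, fit_line(cpr)) + sse(fracturing, fit_line(fracturing))
```

When the groups genuinely follow different trends, `split_error` is strictly smaller — the same effect the study observes with its per-treatment models.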
"Adaptive feature expansion and fusion model for precast component segmentation"
Ka-Veng Yuen, Guanting Ye
Computer-Aided Civil and Infrastructure Engineering, published 2025-05-26. DOI: 10.1111/mice.13523

The assembly and production of sandwich panels for prefabricated components is crucial for the safety of modular construction. Although computer vision has been widely applied to production quality and safety monitoring, the large scale differences among components and the many background interference factors in sandwich-panel prefabricated components pose substantial challenges, making it difficult to maintain recognition accuracy in practice. This paper presents an instance segmentation model, adaptive feature expansion and fusion (AFFS). The model includes a dynamic feature aggregation mechanism and a flattened network architecture, enabling efficient feature processing and precise instance segmentation. Moreover, AFFS supports rapid adaptation to newly added data or component categories by updating only the feature extraction layers. Comprehensive experimental evaluations demonstrate that AFFS achieves outstanding recognition accuracy (mAP50 reaching 95.8% and mAPmin reaching 99.9%), significantly outperforming several state-of-the-art instance segmentation networks, including You Only Look Once (YOLO), Segmenting Objects by Locations v2 (SOLOv2), and Cascade Mask Region-based Convolutional Neural Network (Cascade Mask R-CNN).
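The mAP50 metric counts a predicted instance as correct when its mask overlaps the ground truth with an intersection-over-union (IoU) of at least 0.5. A minimal sketch with toy masks (the pixel sets are invented):

```python
def mask_iou(a, b):
    """IoU of two binary masks represented as sets of (row, col) pixels."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

# Toy predicted vs. ground-truth masks for one sandwich-panel component.
gt   = {(r, c) for r in range(10) for c in range(10)}      # 10x10 square
pred = {(r, c) for r in range(10) for c in range(2, 12)}   # same square, 2 px right

iou = mask_iou(pred, gt)                # 80 / 120 = 2/3
is_true_positive_at_50 = iou >= 0.5     # the "50" in mAP50
```

mAP then averages precision over recall levels (and over classes) using this matching rule; stricter variants raise the IoU threshold.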
"Self-supervised domain adaptive approach for extrapolated crack segmentation with fine-tuned inpainting generative model"
Seungbo Shim
Computer-Aided Civil and Infrastructure Engineering, published 2025-05-26. DOI: 10.1111/mice.13517

The number and proportion of aging infrastructures are increasing, necessitating accurate inspection to ensure safety and structural stability. While computer vision and deep learning have been widely applied to concrete crack detection, domain shift often leaves pretrained models performing poorly at new sites. To address this, a self-supervised domain adaptation method using inpainting-based generative artificial intelligence is proposed. The approach generates site-specific crack images and labels by fine-tuning a Stable Diffusion model with DreamBooth; the resulting data set is then used to train a crack detection neural network via self-supervised learning. Evaluations across two target-domain data sets and eight models show average F1-score improvements of 25.82% and 17.83%, and a comprehensive tunnel-ceiling field test further demonstrates the method's effectiveness. By enhancing real-world crack detection capabilities, this approach supports better structural safety management.
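The reported F1-score gains refer to pixel-wise crack segmentation; the metric itself is the harmonic mean of precision and recall over crack pixels. A minimal sketch (the toy masks are invented):

```python
def f1_score(pred, gt):
    """Pixel-wise F1 between predicted and ground-truth crack pixel sets."""
    tp = len(pred & gt)                 # correctly detected crack pixels
    if tp == 0:
        return 0.0
    precision = tp / len(pred)          # fraction of predictions that are real crack
    recall = tp / len(gt)               # fraction of real crack that was found
    return 2 * precision * recall / (precision + recall)

gt = {(0, c) for c in range(10)}              # a thin 10-pixel crack
pred = {(0, c) for c in range(8)} | {(5, 5)}  # 8 hits, 2 misses, 1 false alarm

score = f1_score(pred, gt)   # precision 8/9, recall 8/10 -> F1 = 16/19
```

Because thin cracks occupy few pixels, F1 is far more informative here than plain pixel accuracy, which a model could inflate by predicting "no crack" everywhere.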
Cover Image, Volume 40, Issue 14
Computer-Aided Civil and Infrastructure Engineering, published 2025-05-21. DOI: 10.1111/mice.13519

The cover image is based on the article "Spatially aware Markov chain-based deterioration prediction of bridge components using a Graph Transformer" by Shogo Inadomi et al., https://doi.org/10.1111/mice.13497. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/mice.13519
Cover Image, Volume 40, Issue 14
Computer-Aided Civil and Infrastructure Engineering, published 2025-05-21. DOI: 10.1111/mice.13518

The cover image is based on the article "Automated seismic event detection considering faulty data interference using deep learning and Bayesian fusion" by Zhiyi Tang et al., https://doi.org/10.1111/mice.13377. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/mice.13518