Title: Tunnel lining segmentation from ground-penetrating radar images using advanced single- and two-stage object detection and segmentation models
Authors: Byongkyu Bae, Yongjin Choi, Hyunjun Jung, Jaehun Ahn
DOI: 10.1111/mice.13528 (https://doi.org/10.1111/mice.13528)
Journal: Computer-Aided Civil and Infrastructure Engineering (JCR Q1, Computer Science, Interdisciplinary Applications; Impact Factor 8.5)
Publication date: 2025-06-02
Publication type: Journal Article
Citations: 0
Abstract
Recent advances in deep learning have enabled automated ground-penetrating radar (GPR) image analysis, particularly through two-stage models such as the mask region-based convolutional neural network (Mask R-CNN) and single-stage models such as you only look once (YOLO), the two mainstream approaches to object detection and segmentation. Despite their potential, the limited comparative analysis of these methods obscures the optimal model choice for practical field applications in tunnel lining inspection. This study addresses this gap by evaluating the performance of Mask R-CNN and YOLOv8 for tunnel lining detection and segmentation in GPR images. Both models are trained on labeled GPR image datasets for tunnel lining, and their prediction accuracy and consistency are evaluated using the intersection over union (IoU) metric. The results show that Mask R-CNN with a ResNeXt backbone achieves superior segmentation accuracy, with an average IoU of 0.973, while YOLOv8 attains an IoU of 0.894 with higher variability in prediction accuracy and occasional detection failures. However, YOLOv8 offers faster training and inference times. Mask R-CNN thus still excels in accuracy for tunnel lining detection in GPR images, even though recent YOLO variants outperform Mask R-CNN in a few specific tasks. We also show that the ResNeXt-enhanced Mask R-CNN further improves on the accuracy of the traditional ResNet-based Mask R-CNN. These findings offer useful insights into the trade-offs among accuracy, consistency, and computational efficiency of the two mainstream models for tunnel lining identification in GPR images, and are expected to guide the future selection and development of optimal deep learning-based inspection models for practical field applications.
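The abstract compares the two models using the intersection over union (IoU) metric on segmentation masks. As an illustration only (not the authors' evaluation code), a minimal sketch of pixel-wise mask IoU in Python/NumPy, where `mask_iou` is a hypothetical helper name:

```python
import numpy as np

def mask_iou(pred_mask, gt_mask):
    """IoU between two binary segmentation masks: |A ∩ B| / |A ∪ B|."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty: define IoU as 0
        return 0.0
    return float(intersection) / float(union)

# Toy example on a 4x4 grid: predicted mask covers the top two rows,
# ground truth covers rows 1-2, so they overlap on one row.
pred = np.zeros((4, 4), dtype=np.uint8); pred[:2, :] = 1
gt = np.zeros((4, 4), dtype=np.uint8); gt[1:3, :] = 1
print(round(mask_iou(pred, gt), 3))  # 4 px intersection / 12 px union -> 0.333
```

An average IoU of 0.973, as reported for the ResNeXt-backed Mask R-CNN, means predicted lining masks almost perfectly overlap the labeled ones; 0.894 for YOLOv8 indicates noticeably looser boundary agreement.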
Journal introduction:
Computer-Aided Civil and Infrastructure Engineering stands as a scholarly, peer-reviewed archival journal, serving as a vital link between advancements in computer technology and civil and infrastructure engineering. The journal serves as a distinctive platform for the publication of original articles, spotlighting novel computational techniques and inventive applications of computers. Specifically, it concentrates on recent progress in computer and information technologies, fostering the development and application of emerging computing paradigms.
Encompassing a broad scope, the journal addresses bridge, construction, environmental, highway, geotechnical, structural, transportation, and water resources engineering. It extends its reach to the management of infrastructure systems, covering domains such as highways, bridges, pavements, airports, and utilities. The journal delves into areas like artificial intelligence, cognitive modeling, concurrent engineering, database management, distributed computing, evolutionary computing, fuzzy logic, genetic algorithms, geometric modeling, internet-based technologies, knowledge discovery and engineering, machine learning, mobile computing, multimedia technologies, networking, neural network computing, optimization and search, parallel processing, robotics, smart structures, software engineering, virtual reality, and visualization techniques.