A deep learning framework for real-time multi-task recognition and measurement of concrete cracks
Gang Xu, Yingshui Zhang, Qingrui Yue, Xiaogang Liu
DOI: 10.1016/j.aei.2025.103127
Journal: Advanced Engineering Informatics, Volume 65, Article 103127 (Q1, Computer Science, Artificial Intelligence; IF 8.0)
Published: 2025-01-17
URL: https://www.sciencedirect.com/science/article/pii/S1474034625000205
Citations: 0
Abstract
This study presents an innovative deep learning framework, YOLO-DL, for automatic multi-task recognition of concrete cracks. The framework integrates the You Only Look Once (YOLO) object detection algorithm with the encoder-decoder architecture of the DeepLabv3+ model, incorporating an attention mechanism and a calibration module, resulting in three distinct branches for crack classification, localization detection, and semantic segmentation. The YOLO-DL model achieves a detection precision of 84.87%, an mAP@0.5 of 83.55%, and a mean intersection-over-union (mIoU) of 94.94% for crack segmentation. The model's segmentation inference time is significantly shorter than that of the DeepLabv3+, fully convolutional network (FCN), U-Net, and SegNet models, making it suitable for real-time concrete crack recognition. The model effectively handles classification, detection, and segmentation tasks, demonstrating enhanced performance and robustness, particularly with the inclusion of the attention mechanism. Additionally, a novel crack width measurement method based on the local element grid method is presented, achieving sub-pixel precision. This method provides comprehensive crack width information, including the maximum width of each crack and its corresponding location, with a maximum relative error of less than 10%. The findings highlight the model's strong inference performance, robust generalization ability, and promising real-time crack recognition capabilities.
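The mIoU figure reported above is the standard segmentation metric: per-class intersection-over-union between predicted and ground-truth masks, averaged over classes (here, background vs. crack). The paper's own evaluation code is not shown; the following is a minimal NumPy sketch of the metric, with toy masks that are illustrative only.

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean intersection-over-union over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        target_c = (target == c)
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent in both masks; skip it
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 masks: 1 marks crack pixels, 0 background.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 1],
                   [0, 0, 0, 0]])
print(round(mean_iou(pred, target), 4))  # → 0.8712
```

Here the crack class scores IoU 5/6 and the background 10/11, averaging to about 0.87; the YOLO-DL model's reported 94.94% mIoU corresponds to much tighter mask agreement than this toy case.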
Journal Introduction:
Advanced Engineering Informatics is an international Journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The Journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitatively and quantitatively. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus and INSPEC.