Automated instance segmentation of asphalt pavement patches based on deep learning

Anzheng He, Allen A. Zhang, Xinyi Xu, Yue Ding, Hang Zhang, Zishuo Dong
Journal: Structural Health Monitoring
DOI: 10.1177/14759217241242428
Published: 2024-04-16

Abstract

The location and pixel-level information of a patch are both critical data for the quantitative evaluation of pavement conditions. However, obtaining patch location and pixel-level information simultaneously is a challenge in intelligent pavement patch surveys. This paper proposes a deep-learning-based patch instance segmentation network (PISNet) that employs You Only Look Once (YOLO)v5 as the baseline and adds a semantic segmentation branch to provide an effective solution to this challenge. The proposed PISNet replaces the original CSPDarknet53 backbone and neck of YOLOv5 with a novel feature extractor named the symmetrical pyramid network (SPN). The proposed SPN repeatedly fuses and transfers shallow semantic features and deep spatial localization features in the order "FPN-PAN-FPN", so that the multi-scale semantic expression and localization ability of the feature map are enhanced. Moreover, a modified feature selection module is incorporated into the SPN as a skip connection to aggregate more spatial detail from the feature map while suppressing redundant features. Experimental results show that, compared with Mask region-based convolutional neural network (R-CNN), You Only Look At CoefficienTs (YOLACT), YOLACT++, EfficientDet, fully convolutional one-stage object detector (FCOS), YOLOv5m, U-Net, DeepLabv3+, and high-resolution network with object-contextual representations (HRNet-OCR), the proposed PISNet has the best detection performance. Meanwhile, the proposed PISNet achieves superior accuracy/frames-per-second trade-offs compared to Mask R-CNN, YOLACT, and YOLACT++. In particular, the proposed PISNet shows promising potential for supporting pavement patch detection in real-time scenarios and for detecting degraded pavement patches.
Moreover, the proposed PISNet yields superior segmentation results compared with Mask R-CNN, YOLACT, YOLACT++, U-Net, HRNet-OCR, and DeepLabv3+ on the public CRACK500 dataset. Code has been made available at: https://github.com/716HAZ/PISNet .
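The "FPN-PAN-FPN" fusion order of the SPN can be illustrated with a minimal NumPy sketch. This is a hypothetical toy model, not the paper's implementation: it replaces the network's convolutions and the feature selection module with plain addition, nearest-neighbor upsampling, and average-pool downsampling, only to show how feature maps flow top-down (FPN), then bottom-up (PAN), then top-down again (FPN) across pyramid levels.

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbor 2x upsampling of a (C, H, W) feature map
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2x(x):
    # 2x2 average pooling of a (C, H, W) feature map
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def fpn_top_down(feats):
    # FPN pass: walk deep-to-shallow, upsampling the deeper map and adding it in
    out = [feats[-1]]
    for f in reversed(feats[:-1]):
        out.append(f + upsample2x(out[-1]))
    return out[::-1]  # return in shallow-to-deep order

def pan_bottom_up(feats):
    # PAN pass: walk shallow-to-deep, downsampling the shallower map and adding it in
    out = [feats[0]]
    for f in feats[1:]:
        out.append(f + downsample2x(out[-1]))
    return out

def spn(feats):
    # the "FPN-PAN-FPN" fusion order described in the abstract (toy version)
    return fpn_top_down(pan_bottom_up(fpn_top_down(feats)))

# three pyramid levels (C, H, W), e.g. strides 8/16/32 of a 256x256 input
feats = [np.ones((16, 32, 32)), np.ones((16, 16, 16)), np.ones((16, 8, 8))]
fused = spn(feats)
print([f.shape for f in fused])  # each level keeps its spatial resolution
```

In the actual PISNet, each fusion step would involve learned convolutions and the modified feature selection module on the skip connections; the sketch only conveys the symmetric routing of information across scales.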