Attention based network for real-time road drivable area, lane line detection and scene identification

Impact Factor: 7.5 | CAS Tier 2 (Computer Science) | JCR Q1 (Automation & Control Systems)
Feng You, Yi Xie, Siyi Zhang, Hao Chen, Haiwei Wang, Wei Zhang, Jianrong Liu
{"title":"Attention based network for real-time road drivable area, lane line detection and scene identification","authors":"Feng You ,&nbsp;Yi Xie ,&nbsp;Siyi Zhang ,&nbsp;Hao Chen ,&nbsp;Haiwei Wang ,&nbsp;Wei Zhang ,&nbsp;Jianrong Liu","doi":"10.1016/j.engappai.2025.111781","DOIUrl":null,"url":null,"abstract":"<div><div>The detection of road drivable areas and lane lines is considered a fundamental component of autonomous driving systems. However, most existing approaches handle these tasks independently, and multi-task networks frequently neglect the inherent correlation between them while failing to differentiate various lane line types. In practice, the delineation of drivable regions is strongly influenced by both lane line characteristics and contextual street scenes. To address these limitations, a novel multi-task network—Real-time Road Drivable Area, Lane Line Detection, and Scene Identification Network (RLSNet)—is proposed. This network is designed to perform simultaneous segmentation of drivable areas, detection of lane lines, and classification of road scenes. Drivable area estimation is optimized through the integration of lane and scene cues, guided by traffic regulations. A Residual Network (ResNet)-based backbone is employed, enhanced with Bidirectional Fusion Attention (BFA) for feature encoding. This is followed by a decoder incorporating a Feature Aggregation Module (FAM) to enable effective semantic–spatial fusion. Lane line detection is further refined using a Bilateral Up-Sampling Decoder (BUSD), while scene understanding is enhanced via a Scene Classification Module (SCM). Extensive experiments conducted on the challenging Berkeley DeepDrive 100K(BDD100K) dataset have demonstrated that RLSNet achieves high accuracy in both drivable area and lane line detection by leveraging the mutual guidance of lane and scene information. Furthermore, the network maintains real-time inference speed at 93 frames per second (FPS), striking a practical balance between semantic fidelity and computational efficiency for real-world deployment. The implementation code has been made publicly available at: <span><span>https://github.com/033186ZSY/RLSNet-master</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"160 ","pages":"Article 111781"},"PeriodicalIF":7.5000,"publicationDate":"2025-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S095219762501783X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

The detection of road drivable areas and lane lines is considered a fundamental component of autonomous driving systems. However, most existing approaches handle these tasks independently, and multi-task networks frequently neglect the inherent correlation between them while failing to differentiate various lane line types. In practice, the delineation of drivable regions is strongly influenced by both lane line characteristics and contextual street scenes. To address these limitations, a novel multi-task network is proposed: the Real-time Road Drivable Area, Lane Line Detection, and Scene Identification Network (RLSNet). This network is designed to perform simultaneous segmentation of drivable areas, detection of lane lines, and classification of road scenes. Drivable area estimation is optimized through the integration of lane and scene cues, guided by traffic regulations. A Residual Network (ResNet)-based backbone is employed, enhanced with Bidirectional Fusion Attention (BFA) for feature encoding. This is followed by a decoder incorporating a Feature Aggregation Module (FAM) to enable effective semantic–spatial fusion. Lane line detection is further refined using a Bilateral Up-Sampling Decoder (BUSD), while scene understanding is enhanced via a Scene Classification Module (SCM). Extensive experiments conducted on the challenging Berkeley DeepDrive 100K (BDD100K) dataset have demonstrated that RLSNet achieves high accuracy in both drivable area and lane line detection by leveraging the mutual guidance of lane and scene information. Furthermore, the network maintains real-time inference speed at 93 frames per second (FPS), striking a practical balance between semantic fidelity and computational efficiency for real-world deployment. The implementation code has been made publicly available at: https://github.com/033186ZSY/RLSNet-master.
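The abstract describes a shared-encoder, multi-head layout: a ResNet backbone with Bidirectional Fusion Attention feeding a drivable-area decoder (FAM), a lane-line decoder (BUSD), and a scene classifier (SCM). The PyTorch sketch below only illustrates that general three-head structure; the module internals, class names, and head designs here are simplified placeholders and assumptions, not the published RLSNet implementation (the authors' code is in the linked repository).

```python
# Minimal, hypothetical sketch of a three-head multi-task layout in the spirit of RLSNet.
# Module internals are simplified placeholders, not the authors' implementation
# (see https://github.com/033186ZSY/RLSNet-master for the published code).
import torch
import torch.nn as nn
import torchvision.models as models


class RLSNetSketch(nn.Module):
    def __init__(self, num_lane_classes: int = 3, num_scene_classes: int = 4):
        super().__init__()
        # Shared ResNet backbone; the paper additionally applies Bidirectional
        # Fusion Attention (BFA) during feature encoding, omitted here.
        resnet = models.resnet34(weights=None)
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])  # 512 x H/32 x W/32

        # Drivable-area head: a plain decoder standing in for the FAM-based decoder.
        self.drivable_head = nn.Sequential(
            nn.Conv2d(512, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 2, 1),  # background vs. drivable
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )
        # Lane-line head: stands in for the Bilateral Up-Sampling Decoder (BUSD).
        self.lane_head = nn.Sequential(
            nn.Conv2d(512, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, num_lane_classes, 1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )
        # Scene head: stands in for the Scene Classification Module (SCM).
        self.scene_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_scene_classes)
        )

    def forward(self, x: torch.Tensor) -> dict:
        feats = self.encoder(x)
        return {
            "drivable": self.drivable_head(feats),  # per-pixel drivable-area logits
            "lane": self.lane_head(feats),          # per-pixel lane-class logits
            "scene": self.scene_head(feats),        # image-level scene logits
        }


if __name__ == "__main__":
    model = RLSNetSketch()
    out = model(torch.randn(1, 3, 384, 640))
    print({k: tuple(v.shape) for k, v in out.items()})
```

A shared encoder with lightweight task heads is what allows this kind of network to stay real-time: the costly feature extraction is paid once and reused by all three tasks, while the cross-task guidance described in the abstract (lane and scene cues informing the drivable-area estimate) is handled by the paper's dedicated modules.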


Source journal
Engineering Applications of Artificial Intelligence (Engineering & Technology: Electrical & Electronic Engineering)
CiteScore: 9.60
Self-citation rate: 10.00%
Articles published: 505
Review time: 68 days
Journal introduction: Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.