MTFENet: A Multi-Task Autonomous Driving Network for Real-Time Target Perception

IEEE Open Journal of Vehicular Technology · IF 4.8 · Q1 (Engineering, Electrical & Electronic)
Qiang Wang, Yongchong Xue, Shuchang Lyu, Guangliang Cheng, Shaoyan Yang, Xin Jin
{"title":"MTFENet: A Multi-Task Autonomous Driving Network for Real-Time Target Perception","authors":"Qiang Wang;Yongchong Xue;Shuchang Lyu;Guangliang Cheng;Shaoyan Yang;Xin Jin","doi":"10.1109/OJVT.2025.3600512","DOIUrl":null,"url":null,"abstract":"Effective autonomous driving systems require a delicate balance of high precision, efficient design, and immediate response capabilities. This study presents MTFENet, a cutting-edge multi-task deep learning model that optimizes network architecture to harmonize speed and accuracy for critical tasks such as object detection, drivable area segmentation, and lane line segmentation. Our end-to-end, streamlined multi-task model incorporates an Adaptive Feature Fusion Module (AF<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>M) to manage the diverse feature demands of different tasks. We also introduced a fusion transform module (FTM) to strengthen global feature extraction and a novel detection head to address target loss and confusion. To enhance computational efficiency, we refined the segmentation head design. Experiments on the BDD100k dataset reveal that MTFENet delivers exceptional performance, achieving an mAP50 of 81.5% in object detection, an mIoU of 93.8% in drivable area segmentation, and an IoU of 33.7% in lane line segmentation. Real-world scenario evaluations demonstrate that MTFENet substantially outperforms current state-of-the-art models across multiple tasks, highlighting its superior adaptability and swift response. 
These results underscore that MTFENet not only leads in precision and speed but also bolsters the reliability and adaptability of autonomous driving systems in navigating complex road conditions.","PeriodicalId":34270,"journal":{"name":"IEEE Open Journal of Vehicular Technology","volume":"6 ","pages":"2406-2423"},"PeriodicalIF":4.8000,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11130405","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of Vehicular Technology","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11130405/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

Abstract

Effective autonomous driving systems require a delicate balance of high precision, efficient design, and immediate response capabilities. This study presents MTFENet, a cutting-edge multi-task deep learning model that optimizes network architecture to harmonize speed and accuracy for critical tasks such as object detection, drivable area segmentation, and lane line segmentation. Our end-to-end, streamlined multi-task model incorporates an Adaptive Feature Fusion Module (AF²M) to manage the diverse feature demands of different tasks. We also introduce a Fusion Transform Module (FTM) to strengthen global feature extraction and a novel detection head to address target loss and confusion. To enhance computational efficiency, we refine the segmentation head design. Experiments on the BDD100k dataset reveal that MTFENet delivers exceptional performance, achieving an mAP50 of 81.5% in object detection, an mIoU of 93.8% in drivable area segmentation, and an IoU of 33.7% in lane line segmentation. Real-world scenario evaluations demonstrate that MTFENet substantially outperforms current state-of-the-art models across multiple tasks, highlighting its superior adaptability and swift response. These results underscore that MTFENet not only leads in precision and speed but also bolsters the reliability and adaptability of autonomous driving systems in navigating complex road conditions.
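The segmentation metrics quoted above (mIoU for drivable area, IoU for lane lines) are computed from predicted and ground-truth masks. A minimal sketch in Python/NumPy, assuming binary per-class masks; this is an illustration of the standard metric, not the authors' evaluation code:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-Union for two boolean masks of the same shape."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Convention: empty prediction and empty target count as a perfect match.
    return float(inter) / float(union) if union > 0 else 1.0

def miou(preds, targets) -> float:
    """Mean IoU over per-class (pred, target) mask pairs."""
    return sum(iou(p, t) for p, t in zip(preds, targets)) / len(preds)

# Toy 4x4 example: prediction covers columns 0-2 of row 0,
# ground truth covers columns 1-3 -> intersection 2, union 4.
pred = np.zeros((4, 4), dtype=bool)
pred[0, :3] = True
gt = np.zeros((4, 4), dtype=bool)
gt[0, 1:4] = True
print(iou(pred, gt))  # 0.5
```

mAP50 for the detection task follows the same intersection-over-union idea applied to bounding boxes, with a 0.5 IoU threshold deciding whether a detection counts as a true positive.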
Source journal metrics: CiteScore 9.60 · Self-citation rate 0.00% · Articles per year: 25 · Review time: 10 weeks