A novel neural network architecture and cross-model transfer learning for multi-task autonomous driving

Impact Factor 1.7 · CAS Quartile 4 (Computer Science) · JCR Q3 (Computer Science, Information Systems)
Youwei Li, Jian Qu
{"title":"A novel neural network architecture and cross-model transfer learning for multi-task autonomous driving","authors":"Youwei Li, Jian Qu","doi":"10.1108/dta-08-2022-0307","DOIUrl":null,"url":null,"abstract":"<h3>Purpose</h3>\n<p>The purpose of this research is to achieve multi-task autonomous driving by adjusting the network architecture of the model. Meanwhile, after achieving multi-task autonomous driving, the authors found that the trained neural network model performs poorly in untrained scenarios. Therefore, the authors proposed to improve the transfer efficiency of the model for new scenarios through transfer learning.</p><!--/ Abstract__block -->\n<h3>Design/methodology/approach</h3>\n<p>First, the authors achieved multi-task autonomous driving by training a model combining convolutional neural network and different structured long short-term memory (LSTM) layers. Second, the authors achieved fast transfer of neural network models in new scenarios by cross-model transfer learning. Finally, the authors combined data collection and data labeling to improve the efficiency of deep learning. Furthermore, the authors verified that the model has good robustness through light and shadow test.</p><!--/ Abstract__block -->\n<h3>Findings</h3>\n<p>This research achieved road tracking, real-time acceleration–deceleration, obstacle avoidance and left/right sign recognition. The model proposed by the authors (UniBiCLSTM) outperforms the existing models tested with model cars in terms of autonomous driving performance. Furthermore, the CMTL-UniBiCL-RL model trained by the authors through cross-model transfer learning improves the efficiency of model adaptation to new scenarios. Meanwhile, this research proposed an automatic data annotation method, which can save 1/4 of the time for deep learning.</p><!--/ Abstract__block -->\n<h3>Originality/value</h3>\n<p>This research provided novel solutions in the achievement of multi-task autonomous driving and neural network model scenario for transfer learning. The experiment was achieved on a single camera with an embedded chip and a scale model car, which is expected to simplify the hardware for autonomous driving.</p><!--/ Abstract__block -->","PeriodicalId":56156,"journal":{"name":"Data Technologies and Applications","volume":"6 1","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data Technologies and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1108/dta-08-2022-0307","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Purpose

The purpose of this research is to achieve multi-task autonomous driving by adjusting the model's network architecture. After achieving multi-task autonomous driving, the authors found that the trained neural network model performs poorly in scenarios it was not trained on. They therefore propose using transfer learning to improve how efficiently the model transfers to new scenarios.

Design/methodology/approach

First, the authors achieved multi-task autonomous driving by training a model that combines a convolutional neural network with differently structured long short-term memory (LSTM) layers. Second, they achieved fast transfer of the neural network model to new scenarios through cross-model transfer learning. Finally, they combined data collection with data labeling to improve the efficiency of deep learning, and they verified the model's robustness through a light-and-shadow test.
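The abstract does not give the exact layer configuration, but the model name reported in the Findings (UniBiCLSTM) suggests a CNN feature extractor followed by unidirectional and bidirectional LSTM layers. The PyTorch sketch below is an illustrative reading of that design, not the authors' code; the layer sizes, the two task heads (steering and throttle) and the class name `CnnUniBiLstm` are assumptions.

```python
# Illustrative sketch only: CNN per-frame features -> unidirectional LSTM ->
# bidirectional LSTM -> two task heads. Sizes and head definitions are assumed.
import torch
import torch.nn as nn

class CnnUniBiLstm(nn.Module):
    def __init__(self, n_steering=3, n_throttle=2):
        super().__init__()
        # Convolutional backbone: per-frame feature extraction from camera images.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Unidirectional LSTM followed by a bidirectional LSTM over the frame sequence.
        self.uni_lstm = nn.LSTM(input_size=48, hidden_size=64, batch_first=True)
        self.bi_lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True,
                               bidirectional=True)
        # Two task heads: steering (left/straight/right) and speed (accelerate/decelerate).
        self.steering_head = nn.Linear(128, n_steering)
        self.throttle_head = nn.Linear(128, n_throttle)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # (b, t, 48)
        seq, _ = self.uni_lstm(feats)                           # (b, t, 64)
        seq, _ = self.bi_lstm(seq)                              # (b, t, 128)
        last = seq[:, -1]                                       # final time step
        return self.steering_head(last), self.throttle_head(last)
```

In this reading, each camera frame is encoded independently by the CNN and the stacked LSTMs model the temporal context before the per-task heads make their predictions.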

Findings

This research achieved road tracking, real-time acceleration and deceleration, obstacle avoidance and left/right sign recognition. The model proposed by the authors (UniBiCLSTM) outperforms the existing models tested on model cars in terms of autonomous driving performance. Furthermore, the CMTL-UniBiCL-RL model trained through cross-model transfer learning improves the efficiency with which the model adapts to new scenarios. The research also proposes an automatic data annotation method that can save a quarter of the time required for deep learning.
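The abstract does not describe the annotation method in detail; one plausible reading of "combining data collection and data labeling" is that each camera frame is stored together with the control command in effect when it was captured, so no separate labeling pass is needed. The sketch below illustrates that idea only; the `camera` and `controller` interfaces, the file layout and the field names are hypothetical.

```python
# Sketch of labeling-at-collection-time: log the concurrent steering/throttle
# command alongside every saved frame. All interfaces here are hypothetical.
import csv
import time
from pathlib import Path

def record_session(camera, controller, out_dir, n_frames=1000):
    """Capture frames and log the concurrent steering/throttle command per frame."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(out / "labels.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "timestamp", "steering", "throttle"])
        for i in range(n_frames):
            frame = camera.read()                    # hypothetical camera interface
            steering, throttle = controller.state()  # hypothetical RC/keyboard input
            name = f"{i:06d}.jpg"
            frame.save(out / name)                   # hypothetical image object
            writer.writerow([name, time.time(), steering, throttle])
```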

Originality/value

This research provides novel solutions for achieving multi-task autonomous driving and for transferring a neural network model to new scenarios. The experiments were carried out with a single camera, an embedded chip and a scale model car, which is expected to simplify the hardware required for autonomous driving.
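As a rough illustration of what cross-model transfer to a new scenario could look like in code, the sketch below reuses pretrained weights and fine-tunes only the recurrent layers and task heads on new-scenario data. This is an assumption about the procedure, not the authors' CMTL-UniBiCL-RL method; it builds on the hypothetical `CnnUniBiLstm` model sketched earlier and assumes the pretrained weights were saved with `torch.save(model.state_dict(), ...)`.

```python
# Sketch of transfer to a new scenario: load pretrained weights, freeze the CNN
# backbone, fine-tune the LSTMs and task heads on a small new-scenario dataset.
import torch

def transfer_to_new_scene(pretrained_path, new_scene_loader, epochs=5, lr=1e-4):
    model = CnnUniBiLstm()
    model.load_state_dict(torch.load(pretrained_path))  # weights from the source scenario

    # Freeze the convolutional backbone; only the LSTMs and heads adapt.
    for p in model.cnn.parameters():
        p.requires_grad = False

    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for frames, steering, throttle in new_scene_loader:
            s_logits, t_logits = model(frames)
            loss = loss_fn(s_logits, steering) + loss_fn(t_logits, throttle)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```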

Source journal

Data Technologies and Applications (Social Sciences – Library and Information Sciences)
CiteScore: 3.80 · Self-citation rate: 6.20% · Articles per year: 29
Journal profile: previously published as Program; online from 2018; subject areas: Information & Knowledge Management, Library Studies.