End-to-End Deep Learning Model for Steering Angle Control of Autonomous Vehicles

Abida Khanum, Chao-Yang Lee, Chu-Sing Yang
DOI: 10.1109/IS3C50286.2020.00056
Published in: 2020 International Symposium on Computer, Consumer and Control (IS3C), November 2020
Citations: 5

Abstract

Recent years have seen rapid advances in machine learning research on autonomous self-driving vehicles. Unlike modern rule-based methods, this study applies supervised end-to-end deep learning to camera images: the model's input is a camera image and its output is the target steering angle. A Residual Network (ResNet) convolutional neural network (CNN) was trained to drive an autonomous vehicle in a simulator; training and simulation were conducted on the Udacity platform. The simulator offers two modes, training and autonomous. Autonomous mode provides two tracks: track_1, which is simple, and track_2, which is complex relative to track_1. This paper uses track_1 for autonomous driving. In training mode, the vehicle is driven through the keyboard and the run is recorded as a dataset. We collected about 11,655 images (left, center, right) together with four attributes (steering, throttle, brake, speed); the images are stored in a folder and the attributes are saved as a CSV file in the same path. The stored raw images and steering-angle data are used to train the model. The dataset was split 80–20 into training and validation sets, as shown in Table I. Images were fed sequentially into the ResNet to predict the driving factors used for end planning decisions and execution of autonomous vehicle motion. The loss of the proposed model is 0.0418, as shown in Figure 2. The trained model achieved a precision of 0.81, in good agreement with the expected performance.
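The recording pipeline described in the abstract (three camera images per sample plus steering, throttle, brake, and speed in a CSV, split 80–20 for training and validation) can be sketched as follows. The column order follows the Udacity simulator's `driving_log.csv` convention; the synthetic rows below are illustrative stand-ins, not the paper's data.

```python
import csv
import io
import random

# driving_log.csv columns written by the Udacity simulator's training mode:
# center, left, right, steering, throttle, brake, speed
CSV_TEXT = "\n".join(
    f"IMG/center_{i}.jpg,IMG/left_{i}.jpg,IMG/right_{i}.jpg,"
    f"{0.1 * (i % 5):.2f},0.8,0.0,30.0"
    for i in range(100)
)

def load_samples(csv_text):
    """Parse CSV rows into ((center, left, right), steering) pairs."""
    samples = []
    for row in csv.reader(io.StringIO(csv_text)):
        image_paths = (row[0], row[1], row[2])
        steering = float(row[3])  # the regression target
        samples.append((image_paths, steering))
    return samples

def split_80_20(samples, seed=0):
    """Shuffle, then split into training (80%) and validation (20%) sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

train, val = split_80_20(load_samples(CSV_TEXT))
print(len(train), len(val))  # 80 20
```

Shuffling before the split keeps sequentially recorded frames (which are highly correlated) from ending up entirely in one partition.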
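The paper does not specify the exact ResNet configuration, so the following is only a minimal sketch of the core idea: a convolutional trunk with an identity-skip residual block whose pooled features regress a single bounded steering angle. Layer sizes, the tiny input resolution, and the `tanh` output squashing are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, w):
    """Naive 'same'-padded 3x3 convolution over an (H, W, C_in) image."""
    H, W, _ = x.shape
    C_out = w.shape[3]
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.empty((H, W, C_out))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 3, j:j + 3, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def residual_block(x, w1, w2):
    """y = relu(x + F(x)): the identity skip connection that defines ResNet."""
    h = np.maximum(conv3x3(x, w1), 0.0)           # conv + ReLU
    return np.maximum(x + conv3x3(h, w2), 0.0)    # add the input back

def predict_steering(image, params):
    """Map one camera frame to a steering angle in [-1, 1]."""
    w1, w2, w_out, b_out = params
    features = residual_block(image, w1, w2)
    pooled = features.mean(axis=(0, 1))           # global average pooling
    return float(np.tanh(pooled @ w_out + b_out)) # bounded regression output

C = 3  # channels kept equal so the skip connection adds shapes directly
params = (
    rng.normal(0.0, 0.1, (3, 3, C, C)),  # first conv weights (random, untrained)
    rng.normal(0.0, 0.1, (3, 3, C, C)),  # second conv weights
    rng.normal(0.0, 0.1, C),             # regression head weights
    0.0,                                 # regression head bias
)
image = rng.random((16, 32, C))  # small stand-in for a camera frame
angle = predict_steering(image, params)
print(angle)
```

With trained weights, minimizing a mean-squared-error loss between predicted and recorded steering angles is what drives the reported loss of 0.0418.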