Training a Remote-Control Car to Autonomously Lane-Follow using End-to-End Neural Networks
Bryce Simmons, Pasham Adwani, Huong Pham, Yazeed Alhuthaifi, A. Wolek
2019 53rd Annual Conference on Information Sciences and Systems (CISS), published 2019-03-20
DOI: 10.1109/CISS.2019.8692851
Citations: 10
Abstract
This paper describes the implementation of an end-to-end learning approach that enables a small, low-cost, remote-control car to lane-follow in a simple indoor environment. A deep neural network (DNN) and a convolutional neural network (CNN) were trained to map raw images from a forward-looking camera to steering and speed commands (right, left, forward, reverse). The mechanical, electrical, and software design of the autonomous car is presented, and the architectures of the DNN and CNN are discussed. The accuracy and loss of both types of neural networks are compared to those of two existing models, VGG16 and DenseNet. A finite state machine is used to control the behavior of the car as it transitions between lane-following and stopped states during experimental demonstrations. The car enters the stopped state when either a stop sign is detected (using a Haar classifier and monocular vision) or an ultrasonic sensor indicates the presence of an obstacle.
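The finite state machine described above can be sketched as follows. This is a minimal illustration, not the authors' code: the state names and the two boolean sensor flags (one for the Haar-classifier stop-sign detection, one for the ultrasonic obstacle check) are assumptions made for clarity.

```python
# Hypothetical sketch of the paper's two-state behavior controller:
# the car lane-follows until either a stop sign is detected or the
# ultrasonic sensor reports an obstacle, at which point it stops.
# State names and sensor flags are illustrative assumptions.

LANE_FOLLOWING = "lane_following"
STOPPED = "stopped"

def next_state(stop_sign_detected: bool, obstacle_detected: bool) -> str:
    """Return the car's next state given the two sensor flags."""
    if stop_sign_detected or obstacle_detected:
        return STOPPED
    return LANE_FOLLOWING

# Example sequence: driving, then an obstacle appears, then it clears.
state = next_state(stop_sign_detected=False, obstacle_detected=False)
print(state)  # lane_following
state = next_state(stop_sign_detected=False, obstacle_detected=True)
print(state)  # stopped
state = next_state(stop_sign_detected=False, obstacle_detected=False)
print(state)  # lane_following
```

In the paper's demonstrations, the stop condition is the disjunction of the two detectors, so either sensor alone is sufficient to halt the car.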