{"title":"Neural Network Based Corn Field Furrow Detection for Autonomous Navigation in Agriculture Vehicles","authors":"Niko Anthony Simon, Cheol-Hong Min","doi":"10.1109/IEMTRONICS51293.2020.9216347","DOIUrl":null,"url":null,"abstract":"Row detection in agricultural applications has commonly used Hough transform techniques and traditional signal processing based approaches relating to machine vision. There are various learning based methods available that are capable of producing similar results in terms of detection. In this paper, a neural network based algorithm is developed, and we compare the Hough transform and a machine learning implementation with the proposed approach to determine which would be the most appropriate in a real-time application given a variety of factors including computational performance, accuracy, and environmental variability. Compared to the learning based approaches which rely on training data, Hough transform based detection relies on a variety of processes, including binarization and denoising, which are not required to be explicitly implemented in the machine learning or neural network models. Additionally, to add another layer of diversity to the three possible solutions examined is the consideration for color input data. The Hough transform method and the neural network model implemented both require color input data while the machine learning model relies on texture features instead of color to make its classification predictions. Compared to the traditional image understanding techniques, autonomous vehicles face challenges due to similarities in color and texture between the crops and their surroundings. Therefore, the algorithm is developed to overcome such challenges. Preliminary results show that the neural network model developed was found to offer the most versatility compared to traditional methods and the highest accuracy on the order of 97% for this application across several different input conditions.","PeriodicalId":269697,"journal":{"name":"2020 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IEMTRONICS51293.2020.9216347","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Row detection in agricultural applications has commonly relied on Hough transform techniques and traditional machine-vision signal processing, but various learning-based methods can now produce comparable detection results. In this paper, a neural network based algorithm is developed and compared against a Hough transform implementation and a machine learning implementation to determine which is most appropriate for a real-time application, considering factors such as computational performance, accuracy, and environmental variability. Whereas the learning-based approaches rely on training data, Hough transform based detection depends on a chain of preprocessing steps, including binarization and denoising, that do not need to be explicitly implemented in the machine learning or neural network models. The three approaches also differ in their input data: the Hough transform method and the neural network model both require color input, while the machine learning model relies on texture features rather than color to make its classification predictions. Autonomous vehicles using traditional image understanding techniques face challenges from the similarity in color and texture between crops and their surroundings, and the proposed algorithm is developed to overcome these challenges. Preliminary results show that the neural network model offers the most versatility compared to the traditional methods and the highest accuracy, on the order of 97%, across several different input conditions.
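For context on the baseline pipeline the abstract describes (binarization and denoising followed by line extraction), the following is a minimal sketch of a generic Hough-transform crop-row detector using OpenCV. It is not the authors' implementation; the excess-green binarization step, the threshold values, kernel sizes, and file names are all illustrative assumptions.

```python
# Minimal sketch of a Hough-transform crop-row detector (not the paper's code).
# Assumes a BGR field image and OpenCV; all parameter values are illustrative.
import cv2
import numpy as np


def detect_rows_hough(bgr_image: np.ndarray):
    """Return candidate row/furrow lines as (x1, y1, x2, y2) segments."""
    # Emphasize green vegetation with an excess-green index, then binarize (Otsu).
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    exg = 2.0 * g - r - b
    exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Denoise the binary mask before edge extraction.
    denoised = cv2.medianBlur(binary, 5)
    edges = cv2.Canny(denoised, 50, 150)

    # Probabilistic Hough transform to find candidate row lines.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=60, minLineLength=80, maxLineGap=20)
    return [] if lines is None else [tuple(seg[0]) for seg in lines]


if __name__ == "__main__":
    frame = cv2.imread("field_frame.jpg")  # hypothetical input frame
    for x1, y1, x2, y2 in detect_rows_hough(frame):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imwrite("rows_overlay.jpg", frame)
```

As the abstract notes, a learning-based detector would replace the explicit binarization and denoising stages with features learned from training data, at the cost of requiring labeled examples.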