Lane detection using Hough transformation and YOLOv8

Bach Nguyen Viet, Tung Pham Xuan
{"title":"Lane detection using hough transformation and Yolov8","authors":"Bach Nguyen Viet, Tung Pham Xuan","doi":"10.47869/tcsj.75.4.15","DOIUrl":null,"url":null,"abstract":"Autonomous vehicles necessitate the integration of advanced technologies such as computer vision and deep learning to comprehend and navigate their surroundings. A crucial yet challenging component of this integration is the accurate detection of lanes, which can be influenced by a multitude of varying lane characteristics and conditions. This research undertakes a comparative analysis of lane detection methodologies, explicitly focusing on traditional image processing techniques and Convolutional Neural Networks (CNNs). The evaluation utilized a sample of 500 images from the CULane dataset, which encompasses a diverse range of traffic scenarios. Initially, a method incorporating Gaussian blurring, Canny edge detection, and Hough line transformation was examined. Despite its efficiency, operating at 30 frames per second, this approach exhibited a high error rate (average Mean Squared Error (MSE) of 0.537), which is attributable to the loss of critical image details during the preprocessing stage. Subsequently, the performance of a fine-tuned YOLOv8 model, trained on a reformatted version of the CULane dataset was assessed. The combination of object detection and subsequent Hough transformation yielded high accuracy, demonstrating the model’s ability to learn and identify relevant lane features. The deep CNNs demonstrated superior performance over classical image processing techniques in terms of lane detection accuracy, thereby underscoring their potential applicability within the realm of autonomous vehicle technology","PeriodicalId":235443,"journal":{"name":"Transport and Communications Science Journal","volume":"3 4","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transport and Communications Science Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.47869/tcsj.75.4.15","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Autonomous vehicles necessitate the integration of advanced technologies such as computer vision and deep learning to comprehend and navigate their surroundings. A crucial yet challenging component of this integration is the accurate detection of lanes, which can be influenced by a multitude of varying lane characteristics and conditions. This research undertakes a comparative analysis of lane detection methodologies, focusing specifically on traditional image processing techniques and Convolutional Neural Networks (CNNs). The evaluation utilized a sample of 500 images from the CULane dataset, which encompasses a diverse range of traffic scenarios. Initially, a method incorporating Gaussian blurring, Canny edge detection, and Hough line transformation was examined. Despite its efficiency, operating at 30 frames per second, this approach exhibited a high error rate (average Mean Squared Error (MSE) of 0.537), attributable to the loss of critical image details during the preprocessing stage. Subsequently, the performance of a fine-tuned YOLOv8 model, trained on a reformatted version of the CULane dataset, was assessed. The combination of object detection and subsequent Hough transformation yielded high accuracy, demonstrating the model's ability to learn and identify relevant lane features. The deep CNNs demonstrated superior performance over classical image processing techniques in terms of lane detection accuracy, thereby underscoring their potential applicability within the realm of autonomous vehicle technology.
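
As a rough illustration of the classical pipeline evaluated in the abstract (Gaussian blurring, Canny edge detection, and a Hough line transform), the following Python/OpenCV sketch strings the three steps together. The blur kernel, Canny thresholds, region-of-interest mask, Hough parameters, and the sample file name are assumptions chosen for readability, not the paper's tuned values.

```python
# Minimal sketch of the classical pipeline: Gaussian blur -> Canny edges ->
# probabilistic Hough line transform. All thresholds are illustrative assumptions.
import cv2
import numpy as np

def detect_lane_lines(bgr_image: np.ndarray) -> np.ndarray:
    """Return candidate lane segments as an (N, 4) array of (x1, y1, x2, y2)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress noise before edge detection
    edges = cv2.Canny(blurred, 50, 150)              # hysteresis thresholds (assumed values)

    # Keep only the lower half of the frame, where lane markings normally appear.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    roi_edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform returns line segments rather than (rho, theta) pairs.
    lines = cv2.HoughLinesP(roi_edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return lines.reshape(-1, 4) if lines is not None else np.empty((0, 4), dtype=int)

if __name__ == "__main__":
    frame = cv2.imread("culane_sample.jpg")          # hypothetical sample frame
    for x1, y1, x2, y2 in detect_lane_lines(frame):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imwrite("lanes_overlay.jpg", frame)
```

Because every stage here discards information (blurring smooths faint markings, edge thresholds drop weak gradients), this pipeline runs quickly but is sensitive to the preprocessing losses the paper identifies as the source of its higher MSE.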
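The second approach pairs a fine-tuned YOLOv8 detector with a Hough transform applied inside each detection. The sketch below, built on the ultralytics package, shows one plausible way to wire the two stages together; the weight file name and the assumption that each detected box tightly encloses a lane marking are illustrative and do not reproduce the paper's training setup.

```python
# Hedged sketch of the detection-then-Hough approach: a fine-tuned YOLOv8 model
# proposes lane regions, and line fitting runs only inside each detected box.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n_lanes.pt")   # hypothetical fine-tuned weights on reformatted CULane

def lanes_from_detections(bgr_image: np.ndarray):
    """Run YOLOv8, then extract lane segments from each detected region."""
    results = model(bgr_image, verbose=False)[0]
    segments = []
    for box in results.boxes.xyxy.cpu().numpy().astype(int):
        x1, y1, x2, y2 = box
        crop = cv2.cvtColor(bgr_image[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(cv2.GaussianBlur(crop, (5, 5), 0), 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 30,
                                minLineLength=20, maxLineGap=10)
        if lines is None:
            continue
        # Shift segment coordinates from crop space back to full-image space.
        for lx1, ly1, lx2, ly2 in lines.reshape(-1, 4):
            segments.append((lx1 + x1, ly1 + y1, lx2 + x1, ly2 + y1))
    return segments
```

Restricting the Hough search to detector proposals is one way to read the paper's finding: the learned detector supplies the lane-relevant context that the purely hand-crafted pipeline loses during preprocessing.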