Vision-Based Object Localization and Classification for Electric Vehicle Driving Assistance

IF 7.0 | Q1 | ENGINEERING, ELECTRICAL & ELECTRONIC
Alfredo Medina-Garcia, Jonathan Duarte-Jasso, J. Cardenas-Cornejo, Yair A. Andrade-Ambriz, Marco-Antonio Garcia-Montoya, M. Ibarra-Manzano, Dora Almanza-Ojeda
{"title":"基于视觉的电动汽车辅助驾驶物体定位与分类","authors":"Alfredo Medina-Garcia, Jonathan Duarte-Jasso, J. Cardenas-Cornejo, Yair A. Andrade-Ambriz, Marco-Antonio Garcia-Montoya, M. Ibarra-Manzano, Dora Almanza-Ojeda","doi":"10.3390/smartcities7010002","DOIUrl":null,"url":null,"abstract":"The continuous advances in intelligent systems and cutting-edge technology have greatly influenced the development of intelligent vehicles. Recently, integrating multiple sensors in cars has improved and spread the advanced drive-assistance systems (ADAS) solutions for achieving the goal of total autonomy. Despite current self-driving approaches and systems, autonomous driving is still an open research issue that must guarantee the safety and reliability of drivers. This work employs images from two cameras and Global Positioning System (GPS) data to propose a 3D vision-based object localization and classification method for assisting a car during driving. The experimental platform is a prototype of a two-sitter electric vehicle designed and assembled for navigating the campus under controlled mobility conditions. Simultaneously, color and depth images from the primary camera are combined to extract 2D features, which are reprojected into 3D space. Road detection and depth features isolate point clouds representing the objects to construct the occupancy map of the environment. A convolutional neural network was trained to classify typical urban objects in the color images. Experimental tests validate car and object pose in the occupancy map for different scenarios, reinforcing the car position visually estimated with GPS measurements.","PeriodicalId":34482,"journal":{"name":"Smart Cities","volume":"2 4","pages":""},"PeriodicalIF":7.0000,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Vision-Based Object Localization and Classification for Electric Vehicle Driving Assistance\",\"authors\":\"Alfredo Medina-Garcia, Jonathan Duarte-Jasso, J. Cardenas-Cornejo, Yair A. Andrade-Ambriz, Marco-Antonio Garcia-Montoya, M. Ibarra-Manzano, Dora Almanza-Ojeda\",\"doi\":\"10.3390/smartcities7010002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The continuous advances in intelligent systems and cutting-edge technology have greatly influenced the development of intelligent vehicles. Recently, integrating multiple sensors in cars has improved and spread the advanced drive-assistance systems (ADAS) solutions for achieving the goal of total autonomy. Despite current self-driving approaches and systems, autonomous driving is still an open research issue that must guarantee the safety and reliability of drivers. This work employs images from two cameras and Global Positioning System (GPS) data to propose a 3D vision-based object localization and classification method for assisting a car during driving. The experimental platform is a prototype of a two-sitter electric vehicle designed and assembled for navigating the campus under controlled mobility conditions. Simultaneously, color and depth images from the primary camera are combined to extract 2D features, which are reprojected into 3D space. Road detection and depth features isolate point clouds representing the objects to construct the occupancy map of the environment. A convolutional neural network was trained to classify typical urban objects in the color images. 
Experimental tests validate car and object pose in the occupancy map for different scenarios, reinforcing the car position visually estimated with GPS measurements.\",\"PeriodicalId\":34482,\"journal\":{\"name\":\"Smart Cities\",\"volume\":\"2 4\",\"pages\":\"\"},\"PeriodicalIF\":7.0000,\"publicationDate\":\"2023-12-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Smart Cities\",\"FirstCategoryId\":\"1089\",\"ListUrlMain\":\"https://doi.org/10.3390/smartcities7010002\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart Cities","FirstCategoryId":"1089","ListUrlMain":"https://doi.org/10.3390/smartcities7010002","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

The continuous advances in intelligent systems and cutting-edge technology have greatly influenced the development of intelligent vehicles. Recently, integrating multiple sensors in cars has improved and spread advanced driver-assistance system (ADAS) solutions toward the goal of full autonomy. Despite current self-driving approaches and systems, autonomous driving is still an open research issue that must guarantee the safety and reliability of drivers. This work employs images from two cameras and Global Positioning System (GPS) data to propose a 3D vision-based object localization and classification method for assisting a car during driving. The experimental platform is a prototype of a two-seater electric vehicle designed and assembled for navigating the campus under controlled mobility conditions. Color and depth images captured simultaneously by the primary camera are combined to extract 2D features, which are reprojected into 3D space. Road detection and depth features isolate the point clouds representing objects, from which the occupancy map of the environment is constructed. A convolutional neural network was trained to classify typical urban objects in the color images. Experimental tests validate the car and object poses in the occupancy map for different scenarios, reinforcing the visually estimated car position with GPS measurements.
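The abstract outlines a geometric pipeline: 2D features from synchronized color and depth images are reprojected into 3D, road points are removed, and the remaining point clouds populate an occupancy map of the environment. The sketch below illustrates the reprojection and occupancy-grid steps in Python/NumPy under stated assumptions (pinhole intrinsics fx, fy, cx, cy; a flat ground plane and a simple height threshold standing in for the paper's road detection); it is not the authors' implementation.

```python
# Minimal sketch of the depth-to-3D reprojection and occupancy-map steps
# summarized in the abstract, assuming a pinhole camera model with known
# intrinsics and a metric depth image. Intrinsics, grid parameters, and the
# height-threshold "road" filter are illustrative assumptions, not the
# method used in the paper.
import numpy as np


def reproject_depth_to_points(depth, fx, fy, cx, cy):
    """Back-project every valid depth pixel (u, v, z) into a 3D camera-frame point."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (N, 3)


def occupancy_grid(points, cell=0.2, x_range=(-10.0, 10.0), z_range=(0.0, 20.0),
                   min_height=0.3):
    """Project non-road points onto a 2D occupancy grid in front of the car.

    Camera convention: x right, y down, z forward, so height above the
    (assumed flat) ground is -y. Points lower than `min_height` are treated
    as road and discarded, standing in for the paper's road-detection step.
    """
    obstacles = points[-points[:, 1] > min_height]
    nx = int((x_range[1] - x_range[0]) / cell)
    nz = int((z_range[1] - z_range[0]) / cell)
    grid = np.zeros((nz, nx), dtype=bool)
    ix = ((obstacles[:, 0] - x_range[0]) / cell).astype(int)
    iz = ((obstacles[:, 2] - z_range[0]) / cell).astype(int)
    inside = (ix >= 0) & (ix < nx) & (iz >= 0) & (iz < nz)
    grid[iz[inside], ix[inside]] = True
    return grid


if __name__ == "__main__":
    # Synthetic depth image (everything 8 m away) just to show the data flow.
    depth = np.full((480, 640), 8.0, dtype=np.float32)
    pts = reproject_depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    occ = occupancy_grid(pts)
    print(pts.shape, occ.shape, int(occ.sum()))
```

The CNN classification stage mentioned in the abstract would then label the color-image regions associated with each isolated point cluster; that stage is omitted from this sketch.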
Source journal
Smart Cities
CiteScore: 11.20
Self-citation rate: 6.20%
Articles published: 0
Review time: 11 weeks
Journal introduction: Smart Cities (ISSN 2624-6511) provides an advanced forum for the dissemination of information on the science and technology of smart cities, publishing reviews, regular research papers (articles), and communications in all areas of research concerning smart cities. The journal aims to encourage scientists to publish their experimental and theoretical results in as much detail as possible, with no restriction on the maximum length of papers, so that all experimental results can be reproduced.