Alfredo Medina-Garcia, Jonathan Duarte-Jasso, J. Cardenas-Cornejo, Yair A. Andrade-Ambriz, Marco-Antonio Garcia-Montoya, M. Ibarra-Manzano, Dora Almanza-Ojeda
{"title":"基于视觉的电动汽车辅助驾驶物体定位与分类","authors":"Alfredo Medina-Garcia, Jonathan Duarte-Jasso, J. Cardenas-Cornejo, Yair A. Andrade-Ambriz, Marco-Antonio Garcia-Montoya, M. Ibarra-Manzano, Dora Almanza-Ojeda","doi":"10.3390/smartcities7010002","DOIUrl":null,"url":null,"abstract":"The continuous advances in intelligent systems and cutting-edge technology have greatly influenced the development of intelligent vehicles. Recently, integrating multiple sensors in cars has improved and spread the advanced drive-assistance systems (ADAS) solutions for achieving the goal of total autonomy. Despite current self-driving approaches and systems, autonomous driving is still an open research issue that must guarantee the safety and reliability of drivers. This work employs images from two cameras and Global Positioning System (GPS) data to propose a 3D vision-based object localization and classification method for assisting a car during driving. The experimental platform is a prototype of a two-sitter electric vehicle designed and assembled for navigating the campus under controlled mobility conditions. Simultaneously, color and depth images from the primary camera are combined to extract 2D features, which are reprojected into 3D space. Road detection and depth features isolate point clouds representing the objects to construct the occupancy map of the environment. A convolutional neural network was trained to classify typical urban objects in the color images. Experimental tests validate car and object pose in the occupancy map for different scenarios, reinforcing the car position visually estimated with GPS measurements.","PeriodicalId":34482,"journal":{"name":"Smart Cities","volume":"2 4","pages":""},"PeriodicalIF":7.0000,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Vision-Based Object Localization and Classification for Electric Vehicle Driving Assistance\",\"authors\":\"Alfredo Medina-Garcia, Jonathan Duarte-Jasso, J. Cardenas-Cornejo, Yair A. Andrade-Ambriz, Marco-Antonio Garcia-Montoya, M. Ibarra-Manzano, Dora Almanza-Ojeda\",\"doi\":\"10.3390/smartcities7010002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The continuous advances in intelligent systems and cutting-edge technology have greatly influenced the development of intelligent vehicles. Recently, integrating multiple sensors in cars has improved and spread the advanced drive-assistance systems (ADAS) solutions for achieving the goal of total autonomy. Despite current self-driving approaches and systems, autonomous driving is still an open research issue that must guarantee the safety and reliability of drivers. This work employs images from two cameras and Global Positioning System (GPS) data to propose a 3D vision-based object localization and classification method for assisting a car during driving. The experimental platform is a prototype of a two-sitter electric vehicle designed and assembled for navigating the campus under controlled mobility conditions. Simultaneously, color and depth images from the primary camera are combined to extract 2D features, which are reprojected into 3D space. Road detection and depth features isolate point clouds representing the objects to construct the occupancy map of the environment. A convolutional neural network was trained to classify typical urban objects in the color images. 
Experimental tests validate car and object pose in the occupancy map for different scenarios, reinforcing the car position visually estimated with GPS measurements.\",\"PeriodicalId\":34482,\"journal\":{\"name\":\"Smart Cities\",\"volume\":\"2 4\",\"pages\":\"\"},\"PeriodicalIF\":7.0000,\"publicationDate\":\"2023-12-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Smart Cities\",\"FirstCategoryId\":\"1089\",\"ListUrlMain\":\"https://doi.org/10.3390/smartcities7010002\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart Cities","FirstCategoryId":"1089","ListUrlMain":"https://doi.org/10.3390/smartcities7010002","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Vision-Based Object Localization and Classification for Electric Vehicle Driving Assistance
Continuous advances in intelligent systems and cutting-edge technology have greatly influenced the development of intelligent vehicles. Recently, integrating multiple sensors in cars has improved and broadened advanced driver-assistance system (ADAS) solutions toward the goal of full autonomy. Despite current self-driving approaches and systems, autonomous driving remains an open research issue in which the safety and reliability of drivers must be guaranteed. This work employs images from two cameras and Global Positioning System (GPS) data to propose a 3D vision-based object localization and classification method for assisting a car during driving. The experimental platform is a prototype of a two-seater electric vehicle designed and assembled for navigating the campus under controlled mobility conditions. Color and depth images from the primary camera are combined to extract 2D features, which are reprojected into 3D space. Road detection and depth features isolate the point clouds representing objects, which are used to construct an occupancy map of the environment. A convolutional neural network was trained to classify typical urban objects in the color images. Experimental tests validate the car and object poses in the occupancy map for different scenarios, reinforcing the visually estimated car position with GPS measurements.
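To make the pipeline described in the abstract more concrete, the sketch below illustrates the two geometric steps it mentions: back-projecting a registered color/depth image into a 3D point cloud with a pinhole camera model, and flattening that cloud onto the ground plane to mark occupied cells. This is a minimal illustration, not the authors' implementation; the intrinsics (fx, fy, cx, cy), grid resolution, and extent are hypothetical placeholders.

```python
# Minimal sketch (assumed pinhole model and illustrative parameters, not the
# paper's code): depth image -> 3D point cloud -> 2D occupancy grid.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an (N, 3) point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth reading

def points_to_occupancy(points, cell_size=0.2, grid_extent=20.0):
    """Project 3D points onto the ground plane and mark occupied grid cells."""
    n_cells = int(2 * grid_extent / cell_size)
    grid = np.zeros((n_cells, n_cells), dtype=np.uint8)
    ix = ((points[:, 0] + grid_extent) / cell_size).astype(int)  # lateral index
    iz = (points[:, 2] / cell_size).astype(int)                  # forward index
    keep = (ix >= 0) & (ix < n_cells) & (iz >= 0) & (iz < n_cells)
    grid[iz[keep], ix[keep]] = 1
    return grid

# Synthetic example: a flat background at 8 m with one closer "object" at 3 m.
depth = np.full((480, 640), 8.0, dtype=np.float32)
depth[200:280, 300:360] = 3.0
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
occupancy = points_to_occupancy(cloud)
print(cloud.shape, int(occupancy.sum()))
```

In the paper's full pipeline, road detection and the CNN classifier would further filter and label these points before they are committed to the occupancy map; the grid above only shows the geometric projection step.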
Journal introduction:
Smart Cities (ISSN 2624-6511) provides an advanced forum for the dissemination of information on the science and technology of smart cities, publishing reviews, regular research papers (articles), and communications in all areas of research concerning smart cities. Our aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible, with no restriction on the maximum length of published papers, so that all experimental results can be reproduced.