Autonomous Vehicles and Machines: Latest Publications

End-to-end evaluation of practical video analytics systems for face detection and recognition
Autonomous Vehicles and Machines Pub Date: 2023-01-16 DOI: 10.2352/EI.2023.35.16.AVM-111
Praneet Singh, E. Delp, A. Reibman
Abstract: Practical video analytics systems deployed in bandwidth-constrained environments, such as autonomous vehicles, perform computer vision tasks like face detection and recognition. In an end-to-end face analytics system, inputs are first compressed using popular video codecs like HEVC and then passed on to modules that perform face detection, alignment, and recognition sequentially. Typically, the modules of these systems are evaluated independently using task-specific, imbalanced datasets that can misconstrue performance estimates. In this paper, we perform a thorough end-to-end evaluation of a face analytics system using a driving-specific dataset, which enables meaningful interpretations. We demonstrate how independent task evaluations, dataset imbalances, and inconsistent annotations can lead to incorrect estimates of system performance. We propose strategies to create balanced evaluation subsets of our dataset and to make its annotations consistent across multiple analytics tasks and scenarios. We then evaluate end-to-end system performance sequentially to account for task interdependencies. Our experiments show that our approach provides consistent, accurate, and interpretable estimates of the system's performance, which is critical for real-world applications.
Citations: 0
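The abstract does not detail the paper's balancing strategy; as a hedged illustration, class-balanced subsampling of an evaluation set might be sketched as below (the `label_fn` grouping key, e.g. face identity or capture scenario, is an assumption, not from the paper):

```python
import random
from collections import defaultdict

def balanced_subset(samples, label_fn, seed=0):
    """Downsample an evaluation set so every class contributes
    equally many samples, sized by the rarest class."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for s in samples:
        by_label[label_fn(s)].append(s)
    n = min(len(group) for group in by_label.values())  # rarest class size
    subset = []
    for group in by_label.values():
        subset.extend(rng.sample(group, n))  # same count from each class
    rng.shuffle(subset)
    return subset
```

Metrics computed on such a subset are no longer dominated by the majority class, at the cost of discarding data; stratified repeats over multiple seeds can recover some of the discarded samples.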
tRANSAC: Dynamic feature accumulation across time for stable online RANSAC model estimation in automotive applications
Autonomous Vehicles and Machines Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-110
Shimiao Li, Yang Song, Ruijiang Luo, Zhongyang Huang, Chengming Liu
Abstract: RANdom SAmple Consensus (RANSAC) is widely used in computer vision and automotive applications. It is an iterative method for estimating the parameters of a mathematical model from a set of observed data that contains outliers. In computer vision, such observed data is usually a set of features (such as feature points or line segments) extracted from images. In automotive applications, RANSAC can be used to estimate the lane vanishing point, camera view angles, the ground plane, etc. In such applications, the changing content of the road scene makes stable online model estimation difficult. In this paper, we propose a framework called tRANSAC that dynamically accumulates features across time so that online RANSAC model estimation can be performed stably. Feature accumulation is dynamic in the following sense: when RANSAC tends to perform robustly and stably, accumulated features are discarded quickly, so that fewer redundant features are used for estimation; when RANSAC tends to perform poorly, accumulated features are discarded slowly, so that more features are available for a better estimate. Experimental results on a road-scene dataset for vanishing point and camera angle estimation show that tRANSAC gives more stable and accurate estimates than the baseline RANSAC method.
Citations: 0
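The abstract describes the accumulate-then-discard idea but not the exact decay rule. A minimal sketch, assuming the inlier ratio of the last RANSAC fit is the stability signal (the threshold and keep probabilities below are illustrative, not from the paper):

```python
import random

class TemporalFeatureBuffer:
    """Accumulate features across frames; discard old features quickly
    when the last RANSAC fit looked stable (high inlier ratio), and
    slowly when it looked poor. All constants are illustrative."""

    def __init__(self, fast_keep=0.3, slow_keep=0.9, stable_ratio=0.5):
        self.features = []
        self.fast_keep = fast_keep      # keep probability after a stable fit
        self.slow_keep = slow_keep      # keep probability after a poor fit
        self.stable_ratio = stable_ratio

    def add_frame(self, new_features, last_inlier_ratio, seed=0):
        rng = random.Random(seed)
        # Stable fit -> keep few old features; poor fit -> keep many.
        keep = (self.fast_keep if last_inlier_ratio >= self.stable_ratio
                else self.slow_keep)
        self.features = [f for f in self.features if rng.random() < keep]
        self.features.extend(new_features)
        return self.features
```

RANSAC is then run on `buf.features` each frame instead of on the current frame's features alone, which is the core of the accumulation idea.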
The influence of image capture and processing on MTF for end of line test and validation
Autonomous Vehicles and Machines Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-126
B. Deegan, Dara Molloy, Jordan Cahill, J. Horgan, Enda Ward, E. Jones, M. Glavin
Citations: 0
Using simulation to quantify the performance of automotive perception systems
Autonomous Vehicles and Machines Pub Date: 2023-01-16 DOI: 10.48550/arXiv.2303.00983
Zhenyi Liu, Devesh Shah, Alireza Rahimpour, D. Upadhyay, J. Farrell, B. Wandell
Abstract: The design and evaluation of complex systems can benefit from software simulation, sometimes called a digital twin. The simulation can be used to characterize system performance or to test performance under conditions that are difficult to measure (e.g., nighttime for automotive perception systems). We describe the image systems simulation software tools that we use to evaluate the performance of imaging systems for object (automobile) detection. We describe experiments with 13 different cameras with a variety of optics and pixel sizes. To measure the impact of camera spatial resolution, we designed a collection of driving scenes with cars at many different distances. We quantified system performance by measuring average precision, and we report a trend relating system resolution to object detection performance. We also quantified the large performance degradation under nighttime conditions, compared to daytime, for all cameras and a COCO pre-trained network.
Citations: 0
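Average precision here follows standard object-detection practice: the area under the precision-recall curve over a score-ranked list of detections. A minimal sketch, assuming every ground-truth positive appears in the ranked list (real evaluations such as COCO also handle missed ground truths and IoU matching):

```python
def average_precision(scores, labels):
    """AP over a ranked detection list: labels[i] is 1 for a true
    positive, 0 for a false positive. Step-interpolated area under
    the precision-recall curve."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    npos = sum(labels)          # assumes no missed ground-truth positives
    ap = 0.0
    for i in order:
        if labels[i]:
            tp += 1
            ap += tp / (tp + fp)  # precision at this recall step
        else:
            fp += 1
    return ap / npos if npos else 0.0
```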
Comprehensive stray light (flare) testing: Lessons learned
Autonomous Vehicles and Machines Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-127
Jackson S. Knappen
Citations: 0
Design of an automotive platform for computer vision research
Autonomous Vehicles and Machines Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-119
Dominik Schörkhuber, R. Popp, Oleksandr Chistov, Fabian Windbacher, Michael Hödlmoser, M. Gelautz
Abstract: The goal of our work is to design an automotive platform for AD/ADAS data acquisition, in view of subsequent application to behaviour analysis of vulnerable road users. We present a novel data capture platform mounted on a Mercedes GLC vehicle. The car is equipped with an array of sensors and recording hardware, including multiple RGB cameras, Lidar, GPS, and an IMU. For future research on human behaviour analysis in traffic scenes, we compile two kinds of recordings. First, we design a range of artificial test cases, which we record on a safety-regulated proving ground with stunt persons, to capture rare traffic events in a predictable and structured way. Second, we record data on the public streets of Vienna, Austria, showing unconstrained pedestrian behaviour in an urban setting, while also complying with European General Data Protection Regulation (GDPR) requirements. We describe the overall framework, including the planning phase, data acquisition, and ground truth annotation.
Citations: 0
MTF as a performance indicator for AI algorithms?
Autonomous Vehicles and Machines Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-125
Patrick Müller, Alexander Braun
Citations: 1
Orchestration of co-operative and adaptive multi-core deep learning engines
Autonomous Vehicles and Machines Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-112
Mihir Mody, Kumar Desappan, P. Swami, David Smith, Shyam Jagannathan, Kevin Lavery, Gregory Shultz, Jason Jones
Abstract: Automated driving functions, such as highway driving and parking assist, are increasingly deployed in high-end cars, with the goal of realizing self-driving cars using deep learning (DL) techniques such as convolutional neural networks (CNNs) and Transformers. DL-based algorithms are used in many integral modules of Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems; camera-based perception, driver monitoring, driving policy, and radar and lidar perception are a few examples. These real-time DL applications require huge compute, up to 250 TOPS, to realize them on an edge device. To meet such needs efficiently in terms of cost and power, silicon vendors provide complex SoCs with multiple DL engines. These SoCs also include the system resources needed to feed data and power to the DL engines and use their compute efficiently, such as L2/L3 on-chip memory, a high-speed DDR interface, and a PMIC. These system resources would otherwise scale linearly with the number of DL engines in the system. This paper proposes solutions to optimize these system resources for a cost- and power-efficient design: (1) co-operative and adaptive asynchronous scheduling of the DL engines, to optimize peak resource usage along multiple vectors such as memory size, throughput, and power/current; and (2) orchestration of the co-operative and adaptive multi-core DL engines into synchronous execution, to achieve maximum utilization of all resources.
Citations: 0
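The abstract does not describe the scheduling machinery in detail. As a rough illustration of the idea in (1), a greedy scheduler can stagger engine start times so that the summed resource demand in any time slot stays under a peak budget; the job model below (fixed demand and duration per engine) is hypothetical, not the paper's:

```python
def stagger_engines(engines, peak_budget):
    """Greedy sketch of co-operative scheduling: give each DL engine a
    start offset so total demand in every slot stays within budget.
    engines: list of (name, resource_demand, duration_in_slots)."""
    timeline = {}   # slot index -> total demand scheduled in that slot
    schedule = {}   # engine name -> assigned start offset
    for name, demand, duration in sorted(engines, key=lambda e: -e[1]):
        offset = 0
        # Slide the job right until it fits under the budget everywhere.
        while any(timeline.get(offset + t, 0) + demand > peak_budget
                  for t in range(duration)):
            offset += 1
        for t in range(duration):
            timeline[offset + t] = timeline.get(offset + t, 0) + demand
        schedule[name] = offset
    return schedule
```

Staggering trades latency for a lower peak, which is the point of sizing shared memory, DDR bandwidth, and the PMIC for less than the sum of all engines' worst cases.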
Simulating motion blur and exposure time and evaluating its effect on image quality
Autonomous Vehicles and Machines Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-117
Hao-Xiang Lin, B. Deegan, J. Horgan, Enda Ward, Patrick Denny, Ciarán Eising, M. Glavin, E. Jones
Citations: 0
OpTIFlow - An optimized end-to-end dataflow for accelerating deep learning workloads on heterogeneous SoCs
Autonomous Vehicles and Machines Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-113
Shyam Jagannathan, Vijay Pothukuchi, Jesse Villarreal, Kumar Desappan, Manu Mathew, Rahul Ravikumar, Aniket Limaye, Mihir Mody, P. Swami, Piyali Goswami, Carlos Rodriguez, Emmanuel Madrigal, Marco Herrera
Citations: 0