Optimizing Image Acquisition Systems for Autonomous Driving

H. Blasinski, J. Farrell, Trisha Lian, Zhenyi Liu, B. Wandell
{"title":"优化自动驾驶图像采集系统","authors":"H. Blasinski, J. Farrell, Trisha Lian, Zhenyi Liu, B. Wandell","doi":"10.2352/ISSN.2470-1173.2018.05.PMII-161","DOIUrl":null,"url":null,"abstract":"Task requirements for image acquisition systems vary substantially between applications: requirements for consumer photography may be irrelevant or may even interfere with requirements for automotive, medical and other applications. The remarkable capabilities of the imaging industry to create lens and sensor designs for specific applications has been demonstrated in the mobile computing market. We might expect that the industry can further innovate if we specify the requirements for other markets. This paper explains an approach to developing image system designs that meet the task requirements for autonomous vehicle applications. It is impractical to build a large number of image acquisition systems and evaluate each of them with real driving data; therefore, we assembled a simulation environment to provide guidance at an early stage. The open-source and freely available software (isetcam, iset3d, and isetauto) uses ray tracing to compute quantitatively how scene radiance propagates through a multi-element lens to form the sensor irradiance. The software then transforms the irradiance into the sensor pixel responses, accounting for a large number of sensor parameters. This enables the user to apply different types of image processing pipelines to generate images that are used to train and test convolutional networks used in autonomous driving. We use the simulation environment to assess performance for different cameras and networks. Introduction The market for image sensors in autonomous vehicles can be divided into two segments. Some image sensor data is used as images to the passengers, such as rendering views from behind the car as the driver backs up. Other image sensor data is used by computational algorithms that guide the vehicle; the output from these sensors is never rendered for the human eye. It is reasonable to expect that the optical design, sensor parameters, and image processing pipeline for these two systems will differ. Mobile imaging applications for consumer photography dominate the market, driving the industry towards sensors with very small pixels (1 micron), a large number of pixels, a Bayer color filter array, and an infrared cutoff filter. There is a nascent market for image sensors for autonomous vehicle decision-system applications, and the most desirable features for such applications are not yet settled. The current offerings include sensors with larger pixels, a color filter array that comprises one quarter red filters and three quarters clear filters, and no infrared cutoff filter (e.g. ON Semiconductor; Omnivision). The requirements for optical properties, such as depth of field effects, may also differ between consumer photography and autonomous vehicles. Consumer photography values narrow depth of field images (bokeh), while autonomous driving value large depth of field to support Lens","PeriodicalId":309050,"journal":{"name":"Photography, Mobile, and Immersive Imaging","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"25","resultStr":"{\"title\":\"Optimizing Image Acquisition Systems for Autonomous Driving\",\"authors\":\"H. Blasinski, J. Farrell, Trisha Lian, Zhenyi Liu, B. 
Wandell\",\"doi\":\"10.2352/ISSN.2470-1173.2018.05.PMII-161\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Task requirements for image acquisition systems vary substantially between applications: requirements for consumer photography may be irrelevant or may even interfere with requirements for automotive, medical and other applications. The remarkable capabilities of the imaging industry to create lens and sensor designs for specific applications has been demonstrated in the mobile computing market. We might expect that the industry can further innovate if we specify the requirements for other markets. This paper explains an approach to developing image system designs that meet the task requirements for autonomous vehicle applications. It is impractical to build a large number of image acquisition systems and evaluate each of them with real driving data; therefore, we assembled a simulation environment to provide guidance at an early stage. The open-source and freely available software (isetcam, iset3d, and isetauto) uses ray tracing to compute quantitatively how scene radiance propagates through a multi-element lens to form the sensor irradiance. The software then transforms the irradiance into the sensor pixel responses, accounting for a large number of sensor parameters. This enables the user to apply different types of image processing pipelines to generate images that are used to train and test convolutional networks used in autonomous driving. We use the simulation environment to assess performance for different cameras and networks. Introduction The market for image sensors in autonomous vehicles can be divided into two segments. Some image sensor data is used as images to the passengers, such as rendering views from behind the car as the driver backs up. Other image sensor data is used by computational algorithms that guide the vehicle; the output from these sensors is never rendered for the human eye. It is reasonable to expect that the optical design, sensor parameters, and image processing pipeline for these two systems will differ. Mobile imaging applications for consumer photography dominate the market, driving the industry towards sensors with very small pixels (1 micron), a large number of pixels, a Bayer color filter array, and an infrared cutoff filter. There is a nascent market for image sensors for autonomous vehicle decision-system applications, and the most desirable features for such applications are not yet settled. The current offerings include sensors with larger pixels, a color filter array that comprises one quarter red filters and three quarters clear filters, and no infrared cutoff filter (e.g. ON Semiconductor; Omnivision). The requirements for optical properties, such as depth of field effects, may also differ between consumer photography and autonomous vehicles. 
Consumer photography values narrow depth of field images (bokeh), while autonomous driving value large depth of field to support Lens\",\"PeriodicalId\":309050,\"journal\":{\"name\":\"Photography, Mobile, and Immersive Imaging\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-01-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"25\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Photography, Mobile, and Immersive Imaging\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2352/ISSN.2470-1173.2018.05.PMII-161\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Photography, Mobile, and Immersive Imaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2352/ISSN.2470-1173.2018.05.PMII-161","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 25

Abstract

Task requirements for image acquisition systems vary substantially between applications: requirements for consumer photography may be irrelevant to, or may even interfere with, requirements for automotive, medical, and other applications. The remarkable capabilities of the imaging industry to create lens and sensor designs for specific applications have been demonstrated in the mobile computing market. We might expect that the industry can innovate further if we specify the requirements for other markets. This paper explains an approach to developing image system designs that meet the task requirements for autonomous vehicle applications. It is impractical to build a large number of image acquisition systems and evaluate each of them with real driving data; therefore, we assembled a simulation environment to provide guidance at an early stage. The open-source and freely available software (isetcam, iset3d, and isetauto) uses ray tracing to compute quantitatively how scene radiance propagates through a multi-element lens to form the sensor irradiance. The software then transforms the irradiance into the sensor pixel responses, accounting for a large number of sensor parameters. This enables the user to apply different types of image processing pipelines to generate images that are used to train and test convolutional networks used in autonomous driving. We use the simulation environment to assess performance for different cameras and networks.

Introduction

The market for image sensors in autonomous vehicles can be divided into two segments. Some image sensor data is presented as images to the passengers, such as rendered views from behind the car as the driver backs up. Other image sensor data is used by computational algorithms that guide the vehicle; the output from these sensors is never rendered for the human eye. It is reasonable to expect that the optical design, sensor parameters, and image processing pipeline for these two systems will differ.

Mobile imaging applications for consumer photography dominate the market, driving the industry towards sensors with very small pixels (1 micron), a large number of pixels, a Bayer color filter array, and an infrared cutoff filter. There is a nascent market for image sensors for autonomous vehicle decision-system applications, and the most desirable features for such applications are not yet settled. The current offerings include sensors with larger pixels, a color filter array that comprises one quarter red filters and three quarters clear filters, and no infrared cutoff filter (e.g. ON Semiconductor; Omnivision). The requirements for optical properties, such as depth of field effects, may also differ between consumer photography and autonomous vehicles. Consumer photography values narrow depth of field images (bokeh), while autonomous driving values a large depth of field to support …
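The abstract describes tracing rays through a multi-element lens to compute the sensor irradiance. As a much-simplified illustration of how propagation through several elements composes, here is a paraxial ray-transfer (ABCD) matrix sketch in Python; the element focal lengths and spacings are invented for the example, and this is not the full ray-tracing method the isetcam/iset3d software uses.

    import numpy as np

    def thin_lens(f_mm):
        # Ray-transfer matrix of an ideal thin lens with focal length f (mm).
        return np.array([[1.0, 0.0], [-1.0 / f_mm, 1.0]])

    def gap(d_mm):
        # Ray-transfer matrix for free-space propagation over d (mm).
        return np.array([[1.0, d_mm], [0.0, 1.0]])

    # A made-up two-element lens: matrices compose right to left, so a ray
    # meets the 50 mm element first, travels 10 mm, meets the -100 mm
    # element, then travels 60 mm to the sensor plane.
    system = gap(60) @ thin_lens(-100) @ gap(10) @ thin_lens(50)

    ray_in = np.array([2.0, 0.0])   # height 2 mm, parallel to the axis
    ray_out = system @ ray_in
    print(ray_out)                  # [height_mm, angle_rad] at the sensor

Full ray tracing, as used in the simulation environment, additionally captures aperture effects, aberrations, and wavelength dependence that this paraxial toy ignores.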
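The irradiance-to-pixel-response step mentioned in the abstract can be sketched as follows. This is a minimal, hypothetical model, not the isetcam implementation, covering only a handful of the "large number of sensor parameters" the abstract refers to: quantum efficiency, shot noise, dark current, read noise, full-well capacity, and ADC bit depth.

    import numpy as np

    def simulate_pixel_response(photon_flux, exposure_s=0.01, qe=0.6,
                                dark_current_e=5.0, read_noise_e=2.0,
                                full_well_e=10000, bit_depth=10, rng=None):
        """Convert a photon-flux map (photons/pixel/s) into raw digital values."""
        if rng is None:
            rng = np.random.default_rng(0)
        # Mean signal electrons: photons collected during the exposure, times QE.
        mean_e = photon_flux * exposure_s * qe
        # Shot noise is Poisson in the collected electrons; dark current
        # contributes its own Poisson-distributed electrons.
        signal_e = rng.poisson(mean_e + dark_current_e * exposure_s)
        # Read noise is approximately Gaussian, added at readout.
        electrons = signal_e + rng.normal(0.0, read_noise_e, signal_e.shape)
        # Clip to the full-well capacity and quantize to the ADC bit depth.
        electrons = np.clip(electrons, 0, full_well_e)
        dn = np.round(electrons / full_well_e * (2**bit_depth - 1))
        return dn.astype(np.uint16)

    # Example: a smooth irradiance gradient across a small 64x64 sensor.
    flux = np.linspace(1e4, 1e6, 64 * 64).reshape(64, 64)  # photons/pixel/s
    raw = simulate_pixel_response(flux)
    print(raw.min(), raw.max())

Varying parameters such as exposure time or read noise in a model like this is the kind of sweep the simulation environment makes cheap, compared with building physical prototypes.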
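The two filter-array layouts contrasted in the introduction, the consumer Bayer mosaic and the automotive arrangement with one red and three clear filters per 2x2 tile (often referred to as RCCC), can be made concrete with a small sketch; the helper function below is hypothetical.

    import numpy as np

    def tile_cfa(pattern_2x2, height, width):
        # Repeat a 2x2 filter tile across a height-by-width sensor.
        tile = np.array(pattern_2x2)
        reps = ((height + 1) // 2, (width + 1) // 2)
        return np.tile(tile, reps)[:height, :width]

    BAYER = [["R", "G"], ["G", "B"]]   # consumer photography
    RCCC  = [["R", "C"], ["C", "C"]]   # 1/4 red, 3/4 clear (automotive)

    print(tile_cfa(BAYER, 4, 4))
    print(tile_cfa(RCCC, 4, 4))

The clear filters pass most visible (and, without a cutoff filter, near-infrared) light, trading color fidelity for sensitivity, which suits decision systems that never render images for a human viewer.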
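To make the depth-of-field trade-off concrete, a standard thin-lens hyperfocal-distance calculation (textbook optics, not taken from the paper) shows why a short-focal-length automotive camera achieves a far larger depth of field than a fast portrait lens; the parameter values are illustrative.

    def hyperfocal_m(focal_mm, f_number, coc_mm):
        # Hyperfocal distance in meters: H = f^2 / (N * c) + f.
        h_mm = focal_mm**2 / (f_number * coc_mm) + focal_mm
        return h_mm / 1000.0

    # Fast portrait lens (narrow depth of field, strong bokeh):
    print(hyperfocal_m(focal_mm=85, f_number=1.8, coc_mm=0.03))   # ~134 m
    # Short automotive lens stopped down (large depth of field):
    print(hyperfocal_m(focal_mm=6, f_number=8, coc_mm=0.005))     # ~0.9 m

Focusing at the hyperfocal distance keeps everything from half that distance to infinity acceptably sharp, so the automotive configuration is sharp from roughly half a meter outward, while the portrait configuration isolates a subject against a blurred background.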