Optimizing Image Acquisition Systems for Autonomous Driving
H. Blasinski, J. Farrell, Trisha Lian, Zhenyi Liu, B. Wandell
Photography, Mobile, and Immersive Imaging, 2018-01-28. DOI: 10.2352/ISSN.2470-1173.2018.05.PMII-161
Citations: 25
Abstract
Task requirements for image acquisition systems vary substantially between applications: requirements for consumer photography may be irrelevant to, or may even interfere with, requirements for automotive, medical, and other applications. The imaging industry's remarkable ability to create lens and sensor designs for specific applications has been demonstrated in the mobile computing market. We might expect the industry to innovate further if we specify the requirements for other markets. This paper explains an approach to developing image system designs that meet the task requirements for autonomous vehicle applications. It is impractical to build a large number of image acquisition systems and evaluate each of them with real driving data; therefore, we assembled a simulation environment to provide guidance at an early design stage. The open-source and freely available software (isetcam, iset3d, and isetauto) uses ray tracing to compute quantitatively how scene radiance propagates through a multi-element lens to form the sensor irradiance. The software then transforms the irradiance into sensor pixel responses, accounting for a large number of sensor parameters. This enables the user to apply different types of image processing pipelines to generate images that are used to train and test convolutional networks for autonomous driving. We use the simulation environment to assess performance for different cameras and networks.

Introduction

The market for image sensors in autonomous vehicles can be divided into two segments. Some image sensor data are rendered as images for the passengers, such as views from behind the car as the driver backs up. Other image sensor data are used by computational algorithms that guide the vehicle; the output from these sensors is never rendered for the human eye. It is reasonable to expect that the optical design, sensor parameters, and image processing pipeline for these two systems will differ.
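The irradiance-to-pixel-response stage the abstract describes can be sketched with a toy photon-transfer model. This is a minimal illustration, not the isetcam API; the function name and all parameter values (quantum efficiency, well capacity, bit depth) are assumptions chosen for the example:

```python
import numpy as np

def irradiance_to_digital(irradiance, exposure_s=0.01, pixel_area=1e-12,
                          qe=0.6, well_capacity=9000, bit_depth=10, rng=None):
    """Toy model: photon irradiance (photons / m^2 / s) -> digital numbers.

    Illustrative only; a full sensor simulation also models dark current,
    read noise, fixed-pattern noise, and the color filter array.
    """
    rng = np.random.default_rng() if rng is None else rng
    mean_photons = irradiance * pixel_area * exposure_s   # expected photons per pixel
    photons = rng.poisson(mean_photons)                   # photon shot noise
    electrons = np.minimum(photons * qe, well_capacity)   # quantum efficiency + full-well clipping
    max_dn = 2 ** bit_depth - 1
    dn = np.round(electrons / well_capacity * max_dn)     # linear quantization to bit depth
    return dn.astype(np.int32)
```

Running the same irradiance map through variants of this stage (different pixel sizes, well capacities, exposure settings) is what lets a simulation environment compare candidate sensors without building hardware.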
Mobile imaging applications for consumer photography dominate the market, driving the industry toward sensors with very small pixels (1 micron), a large number of pixels, a Bayer color filter array, and an infrared cutoff filter. There is a nascent market for image sensors for autonomous vehicle decision-system applications, and the most desirable features for such applications are not yet settled. Current offerings include sensors with larger pixels, a color filter array comprising one quarter red filters and three quarters clear filters, and no infrared cutoff filter (e.g., ON Semiconductor; OmniVision). Requirements for optical properties, such as depth of field, may also differ between consumer photography and autonomous vehicles. Consumer photography values narrow depth of field images (bokeh), while autonomous driving values a large depth of field to support Lens
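The one-quarter-red, three-quarters-clear filter arrangement described above (commonly called an RCCC pattern) tiles a 2x2 unit cell across the sensor. The function below is a hypothetical sketch for illustration, not vendor code:

```python
import numpy as np

def rccc_mosaic(height, width):
    """Tile an RCCC 2x2 unit cell (one red, three clear filters) over the sensor."""
    cell = np.array([["R", "C"],
                     ["C", "C"]])
    reps_y = height // 2 + 1
    reps_x = width // 2 + 1
    return np.tile(cell, (reps_y, reps_x))[:height, :width]
```

The clear pixels pass more light than Bayer green pixels (no spectral filtering beyond the cover glass), which improves low-light sensitivity; the sparse red channel is retained chiefly for detecting tail lights and traffic signals.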