Occlusion Model—A Geometric Sensor Modeling Approach for Virtual Testing of ADAS/AD Functions

Authors: Simon Genser; Stefan Muckenhuber; Christoph Gaisberger; Sarah Haas; Timo Haid
Journal: IEEE Open Journal of Intelligent Transportation Systems, vol. 4, pp. 439-455
DOI: 10.1109/OJITS.2023.3283618
Published: 2023-06-07 (Journal Article)
Open-access PDF: https://ieeexplore.ieee.org/iel7/8784355/9999144/10146003.pdf
Abstract: New advanced driver assistance system/automated driving (ADAS/AD) functions have the potential to significantly enhance the safety of vehicle passengers and road users, while also enabling new transportation applications and potentially reducing CO2 emissions. To achieve the next level of driving automation, i.e., SAE Level 3, physical test drives need to be supplemented by simulations in virtual test environments. A major challenge for today’s virtual test environments is to provide a realistic representation of the vehicle’s perception system (camera, lidar, radar). Therefore, new and improved sensor models are required to perform representative virtual tests that can supplement physical test drives. In this article, we present a computationally efficient, mathematically complete, and geometrically exact generic sensor modeling approach that solves the field-of-view (FOV) and occlusion tasks. We also discuss potential extensions, such as bounding-box cropping and sensor-specific, weather-dependent FOV-reduction approaches for camera, lidar, and radar. The performance of the new modeling approach is demonstrated using camera measurements from a test campaign conducted in Hungary in 2020, plus three artificial scenarios: a multi-target scenario with an adjacent truck occluding other road users, and two traffic-jam situations in which the ego vehicle is either a car or a truck. These scenarios are benchmarked against existing sensor modeling approaches that only exclude objects outside the sensor’s maximum detection range or angle. The modeling approach presented can be used as is or can serve as the basis for a more complex sensor model, since it reduces the number of potentially detectable targets and therefore improves the performance of subsequent simulation steps.
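To illustrate the geometric filtering idea the abstract describes, the sketch below implements a minimal 2D field-of-view and occlusion filter. It is not the paper’s model: the target representation (Target), the sensor parameters (max_range, half_angle), and the interval-containment occlusion test are simplifying assumptions made only for this example, whereas the published approach is described as mathematically complete and geometrically exact.

```python
# Simplified 2D FOV + occlusion sketch; all names and parameters are
# illustrative assumptions, not taken from the paper.
import math
from dataclasses import dataclass


@dataclass
class Target:
    x: float       # longitudinal position relative to the sensor [m]
    y: float       # lateral position relative to the sensor [m]
    width: float   # lateral extent used for the occlusion check [m]


def in_fov(t: Target, max_range: float, half_angle: float) -> bool:
    """Baseline filter: keep only targets inside the maximum range and opening angle."""
    return (math.hypot(t.x, t.y) <= max_range
            and abs(math.atan2(t.y, t.x)) <= half_angle)


def angular_interval(t: Target) -> tuple:
    """Angular interval subtended by the target, approximated from its width."""
    center = math.atan2(t.y, t.x)
    half = math.atan2(t.width / 2.0, math.hypot(t.x, t.y))
    return (center - half, center + half)


def visible_targets(targets, max_range=150.0, half_angle=math.radians(60.0)):
    """Return targets inside the FOV that are not fully hidden by a closer target."""
    candidates = [t for t in targets if in_fov(t, max_range, half_angle)]
    candidates.sort(key=lambda t: math.hypot(t.x, t.y))  # nearest first
    visible, occluders = [], []
    for t in candidates:
        lo, hi = angular_interval(t)
        # Treat a target as occluded only if a single closer target
        # covers its whole angular interval (a deliberate simplification).
        hidden = any(olo <= lo and hi <= ohi for olo, ohi in occluders)
        if not hidden:
            visible.append(t)
        occluders.append((lo, hi))
    return visible


if __name__ == "__main__":
    # A truck directly ahead hides a car behind it; a motorcycle off to the side stays visible.
    scene = [Target(20.0, 0.0, 2.5), Target(60.0, 0.5, 1.8), Target(30.0, 8.0, 0.8)]
    print([(t.x, t.y) for t in visible_targets(scene)])
```

In this sketch, in_fov alone corresponds to the benchmark approaches mentioned in the abstract, which exclude only objects outside the sensor’s maximum detection range or angle; visible_targets additionally removes fully occluded targets. Partial occlusion, merged occluders, 3D geometry, and sensor-specific or weather-dependent FOV reductions are intentionally left out of this illustration.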