{"title":"Cityscapes TL++: Semantic Traffic Light Annotations for the Cityscapes Dataset","authors":"Johannes Janosovits","doi":"10.1109/icra46639.2022.9812144","DOIUrl":null,"url":null,"abstract":"There is a gap in holistic urban scene understanding between multi-modal datasets for segmentation and object detection on the one hand and traffic light datasets on the other hand. The role of traffic lights in the former is not labelled, making it difficult to use them for higher-level tasks and leave critical information of an intersection scene blank. Including traffic lights from traffic light specific datasets into the comprehensive semantic data introduces a penalty from the domain shift. We close this gap by providing semantically annotated traffic lights for the Cityscapes dataset. We demonstrate the domain shift penalty by using a traffic light dataset from a similar domain and show superior performance on data labelled in the original domain. We demonstrate an application by training a real-time capable network for semantic segmentation and object detection which can now additionally make sense of traffic lights, delivering an F1- Score of 66.4% on the important class of traffic lights relevant to the ego vehicle. 
The network is made publicly available at https://github.com/joeda/NNAD and the data at https://github.com/KIT-MRT/cityscapes-t1.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"63 1-2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icra46639.2022.9812144","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2
Abstract
There is a gap in holistic urban scene understanding between multi-modal datasets for segmentation and object detection on the one hand, and traffic light datasets on the other. The role of traffic lights is not labelled in the former, making them difficult to use for higher-level tasks and leaving critical information about an intersection scene blank. Including traffic lights from traffic-light-specific datasets in the comprehensive semantic data introduces a domain-shift penalty. We close this gap by providing semantically annotated traffic lights for the Cityscapes dataset. We demonstrate the domain-shift penalty by training on a traffic light dataset from a similar domain and showing superior performance on data labelled in the original domain. We demonstrate an application by training a real-time-capable network for semantic segmentation and object detection which can now additionally make sense of traffic lights, delivering an F1-score of 66.4% on the important class of traffic lights relevant to the ego vehicle. The network is made publicly available at https://github.com/joeda/NNAD and the data at https://github.com/KIT-MRT/cityscapes-t1.
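For context, a per-class F1-score such as the 66.4% reported above is the harmonic mean of precision and recall computed from detection counts for that class. A minimal sketch of the standard formula (the counts in the usage example are illustrative, not taken from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Standard per-class F1 from true positives, false positives,
    and false negatives: F1 = 2PR / (P + R)."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Illustrative counts only (hypothetical, not from the paper):
# 100 correct detections, 50 false alarms, 50 misses -> P = R = F1 ≈ 0.667
print(f1_score(100, 50, 50))
```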