B. Heisele, H. Neef, W. Ritter, R. Schneider, G. Wanielik
{"title":"基于彩色视频和雷达数据融合的交通场景目标检测","authors":"B. Heisele, H. Neef, W. Ritter, R. Schneider, G. Wanielik","doi":"10.1109/ADFS.1996.581080","DOIUrl":null,"url":null,"abstract":"Object detection is one of the key functions in autonomous driving. For this purpose different sensor types-such as laser or millimeter-wave (MMW) radar-are in use but most systems are solely based on vision (Thomanek et al., 1994). In contrast, this paper presents a data fusion approach for joint radar video object detection.","PeriodicalId":254509,"journal":{"name":"Proceeding of 1st Australian Data Fusion Symposium","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Object detection in traffic scenes by a colour video and radar data fusion approach\",\"authors\":\"B. Heisele, H. Neef, W. Ritter, R. Schneider, G. Wanielik\",\"doi\":\"10.1109/ADFS.1996.581080\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Object detection is one of the key functions in autonomous driving. For this purpose different sensor types-such as laser or millimeter-wave (MMW) radar-are in use but most systems are solely based on vision (Thomanek et al., 1994). In contrast, this paper presents a data fusion approach for joint radar video object detection.\",\"PeriodicalId\":254509,\"journal\":{\"name\":\"Proceeding of 1st Australian Data Fusion Symposium\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1996-11-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceeding of 1st Australian Data Fusion Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ADFS.1996.581080\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceeding of 1st Australian Data Fusion Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ADFS.1996.581080","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Object detection in traffic scenes by a colour video and radar data fusion approach

Abstract
Object detection is one of the key functions in autonomous driving. For this purpose, different sensor types, such as laser or millimeter-wave (MMW) radar, are in use, but most systems are based solely on vision (Thomanek et al., 1994). In contrast, this paper presents a data fusion approach to joint radar and video object detection.
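The abstract does not describe the fusion mechanism itself. As a purely illustrative sketch (not the authors' method), the snippet below shows one common way radar detections can seed video-based detection: a radar range/azimuth measurement is projected into the image plane to form a region of interest for the colour-video stage. The function name, camera parameters, and the flat-ground and pinhole-camera assumptions are all hypothetical.

```python
import numpy as np

def radar_to_image_roi(range_m, azimuth_rad, fx, fy, cx, cy,
                       cam_height_m=1.2, roi_width_m=2.0, roi_height_m=1.8):
    """Project a radar detection (range, azimuth) into the image and return
    a pixel region of interest (ROI) for a video-based classifier.

    Illustrative simplifications: flat road, camera at the radar origin
    looking along the radar boresight, ideal pinhole camera model.
    """
    # Radar detection in camera coordinates (x right, y down, z forward).
    x = range_m * np.sin(azimuth_rad)
    z = range_m * np.cos(azimuth_rad)
    y = cam_height_m  # ground contact point lies cam_height_m below the camera

    # Pinhole projection of the target's ground contact point.
    u = fx * x / z + cx
    v = fy * y / z + cy

    # Approximate pixel extent of a vehicle-sized box at that distance.
    half_w = 0.5 * roi_width_m * fx / z
    h = roi_height_m * fy / z
    return (int(u - half_w), int(v - h), int(u + half_w), int(v))

# Example: a radar target 30 m ahead, 3 degrees to the right of boresight.
roi = radar_to_image_roi(30.0, np.deg2rad(3.0), fx=800, fy=800, cx=320, cy=240)
print(roi)  # (left, top, right, bottom) in pixels
```

In such a scheme the radar supplies reliable range and coarse bearing, while the colour-video stage verifies and classifies the object inside the ROI; the specific gating and verification steps used in the paper may differ.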