A. Solé, O. Mano, G. Stein, H. Kumon, Y. Tamatsu, A. Shashua
{"title":"Solid or not solid: vision for radar target validation","authors":"A. Solé, O. Mano, G. Stein, H. Kumon, Y. Tamatsu, A. Shashua","doi":"10.1109/IVS.2004.1336490","DOIUrl":null,"url":null,"abstract":"In the context of combining radar and vision sensors for a fusion application in dense city traffic situations, one of the major challenges is to be able to validate radar targets. We take a high-level fusion approach assuming that both sensor modalities have the capacity to independently locate and identify targets of interest. In this context, radar targets can either correspond to a vision target- in which case the target is validated without further processing- or not. It is the latter case that drives the focus of this paper. A non-matched radar target can correspond to some solid object which is not part of the objects of interest of the vision sensor (such as a guard-rail) or can be caused by reflections in which case it is a ghost target which does not match any physical object in the real world. We describe a number of computational steps for the decision making of non-matched radar targets. The computations combine both direct motion parallax measurements and indirect motion analysis- which are not sufficient for computing parallax but are nevertheless quite effective- and pattern classification steps for covering situations which motion analysis are weak or ineffective. 
One of the major advantages of our high-level fusion approach is that it allows the use of simpler (low cost) radar technology to create a combined high performance system.","PeriodicalId":296386,"journal":{"name":"IEEE Intelligent Vehicles Symposium, 2004","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"55","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Intelligent Vehicles Symposium, 2004","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IVS.2004.1336490","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 55
Abstract
In the context of combining radar and vision sensors for a fusion application in dense city traffic, one of the major challenges is validating radar targets. We take a high-level fusion approach, assuming that both sensor modalities can independently locate and identify targets of interest. In this setting, a radar target either corresponds to a vision target, in which case it is validated without further processing, or it does not. The latter case is the focus of this paper. A non-matched radar target can correspond to a solid object that is not among the objects of interest of the vision sensor (such as a guard-rail), or it can be caused by reflections, in which case it is a ghost target that does not match any physical object in the real world. We describe a number of computational steps for deciding the status of non-matched radar targets. The computations combine direct motion parallax measurements, indirect motion analysis (which is not sufficient for computing parallax but is nevertheless quite effective), and pattern classification steps that cover situations in which motion analysis is weak or ineffective. One of the major advantages of our high-level fusion approach is that it allows the use of simpler (low-cost) radar technology to create a combined high-performance system.
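The first stage of the approach, associating independently detected radar targets with vision targets and validating the matches, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Target` type, the Euclidean association metric, and the `gate` distance threshold are all hypothetical simplifications of the matching step described in the abstract.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Target:
    x: float  # lateral position in metres (hypothetical coordinate frame)
    z: float  # longitudinal range in metres

def validate_radar_targets(radar, vision, gate=2.0):
    """Split radar targets into vision-matched (validated without further
    processing) and non-matched (candidates for the solid/ghost analysis
    described in the paper). `gate` is an assumed association distance."""
    validated, unmatched = [], []
    for r in radar:
        # A radar target is validated if any vision target lies within the gate.
        if any(hypot(r.x - v.x, r.z - v.z) <= gate for v in vision):
            validated.append(r)   # independently confirmed by vision
        else:
            unmatched.append(r)   # solid non-vehicle object or ghost target
    return validated, unmatched
```

The non-matched list would then feed the parallax, indirect-motion, and pattern-classification stages; those decision steps are specific to the paper and are not reproduced here.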