Robust Small-Object Detection for Outdoor Wide-Area Surveillance
Daisuke Abe, E. Segawa, Osafumi Nakayama, M. Shiohara, S. Sasaki, Nobuyuki Sugano, H. Kanno
IAPR International Workshop on Machine Vision Applications, 2008-07-01. DOI: 10.1093/ietisy/e91-d.7.1922
Citations: 6
Abstract
In this paper, we present a robust small-object detection method, which we call “Frequency Pattern Emphasis Subtraction (FPES)”, for wide-area surveillance of sites such as harbors, rivers, and plant premises. To achieve robust detection under changes in environmental conditions, such as illuminance level, weather, and camera vibration, our method distinguishes target objects from background and noise based on the differences in their frequency components. The evaluation results demonstrate that our method detected more than 95% of target objects in images of large surveillance areas ranging from 30 to 75 meters wide at their center.
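The abstract does not give the details of FPES, but the core idea it states (separating small targets from background and noise by their frequency content) can be illustrated with a minimal sketch. The code below is an assumption-laden stand-in, not the authors' algorithm: it subtracts a background frame, then band-passes the difference in the Fourier domain so that low-frequency disturbances (gradual illumination change) and high-frequency noise are suppressed while mid-frequency small-object patterns are emphasized. The function name and the cutoff values `low`/`high` are illustrative choices.

```python
import numpy as np

def frequency_emphasis_subtraction(frame, background, low=0.05, high=0.45):
    """Illustrative sketch (NOT the published FPES method): band-pass the
    frame-minus-background difference in the Fourier domain, keeping the
    mid-frequency band where small objects tend to concentrate energy."""
    diff = frame.astype(np.float64) - background.astype(np.float64)
    # 2-D FFT of the difference image, DC shifted to the center
    spectrum = np.fft.fftshift(np.fft.fft2(diff))
    h, w = diff.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalized radial frequency of each spectrum bin
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    # hard band-pass mask: drop DC / slow illumination drift (r < low)
    # and high-frequency noise (r > high)
    mask = (r >= low) & (r <= high)
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    return np.abs(filtered)
```

With a synthetic scene, a uniform illumination shift between frame and background is rejected (it is pure DC), while a small bright object produces a strong localized response, which is the qualitative behavior the abstract attributes to frequency-based separation.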