Neural Super Position and Visual Acuity for Motion Detection and Tracking
Andrew P. Sacco, D. Arutyunov, A. Gonzalez, W. McKinley, A. Kundu
Proceedings of the 2nd International Conference on Vision, Image and Signal Processing, 2018-08-27
DOI: 10.1145/3271553.3271601
This paper describes a visible passive/LIDAR superposition-based navigation and tracking camera array for applications across many fields. The problem has three components: 1) design of a camera array for image acquisition over a wide field of view, 2) design and implementation using low-cost components, and 3) a new multi-tier target tracking algorithm. In the camera array, each camera channel has a standard field of view, while the composite array's field of coverage is wide enough to capture targets moving in three-dimensional space, covering up to 4π steradians. Image information is collected by multiple camera channels of the full array over the entire field of coverage, with multiple images collected at any instant. Such collection greatly helps long-term tracking, which is a challenging task, especially in an unknown environment, because image information is lost when objects leave a single camera's field of view. Most tracking algorithms operate on images taken by sensors designed independently of the algorithm. In this paper, the camera array and the tracking algorithm are jointly developed and implemented for optimal performance by exploiting data from multiple camera geometries. We describe the tracking algorithm and a simulation experiment to demonstrate how such imagery helps tracking in a noisy environment.