Neural Super Position and Visual Acuity for Motion Detection and Tracking

Andrew P. Sacco, D. Arutyunov, A. Gonzalez, W. McKinley, A. Kundu
{"title":"Neural Super Position and Visual Acuity for Motion Detection and Tracking","authors":"Andrew P. Sacco, D. Arutyunov, A. Gonzalez, W. McKinley, A. Kundu","doi":"10.1145/3271553.3271601","DOIUrl":null,"url":null,"abstract":"This paper describes a visible passive/LIDAR superposition based navigation and tracking camera array for applications across many fields. This problem has three components: 1) design of a camera array for image acquisition over a wide field of view, 2) design and implementation using low-cost components, and 3) a new multi-tier target tracking algorithm. In the camera array, each camera channel has a standard field of view while the composite camera array field of coverage is wide enough to capture targets moving in three-dimensional space covering up to 4π steradians. Image information is collected by multiple camera channels of the full camera array over the entire field of coverage with multiple images collected at any instant of time. Such collection greatly helps long-term tracking which is a challenging task especially in an unknown environment due to the loss of image information from objects leaving a camera's field of view. Most tracking algorithms work on images taken by sensors not related to the algorithm. In this paper, the image array and tracking algorithm development and implementation are jointly developed for optimal performance by exploiting the data from multiple camera geometries. 
We describe the tracking algorithm and a simulation experiment to demonstrate how such imagery helps tracking in a noisy environment.","PeriodicalId":414782,"journal":{"name":"Proceedings of the 2nd International Conference on Vision, Image and Signal Processing","volume":"99 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd International Conference on Vision, Image and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3271553.3271601","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper describes a visible passive/LIDAR superposition-based navigation and tracking camera array for applications across many fields. The problem has three components: 1) design of a camera array for image acquisition over a wide field of view, 2) design and implementation using low-cost components, and 3) a new multi-tier target tracking algorithm. In the camera array, each camera channel has a standard field of view, while the composite array's field of coverage is wide enough to capture targets moving in three-dimensional space over up to 4π steradians. Image information is collected by multiple camera channels over the entire field of coverage, with multiple images captured at any instant of time. Such collection greatly aids long-term tracking, a challenging task especially in an unknown environment, because image information is lost when objects leave a single camera's field of view. Most tracking algorithms operate on images taken by sensors designed independently of the algorithm. In this paper, the camera array and the tracking algorithm are instead jointly developed and implemented for optimal performance by exploiting the data available from multiple camera geometries. We describe the tracking algorithm and a simulation experiment that demonstrates how such imagery aids tracking in a noisy environment.