Distributed adaptive spectral and spatial sensor fusion for super-resolution classification
T. Khuon, R. Rand, J. Greer, E. Truslow
2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), published 2012-10-09
DOI: 10.1109/AIPR.2012.6528194
Citations: 3
Abstract
A distributed architecture for adaptive sensor fusion (a multisensor fusion neural net) is introduced for 3D imagery data; it makes use of a super-resolution technique computed with a Bregman-iteration deconvolution algorithm. The architecture is a cascaded neural network with two levels. The first level consists of two independent sensor neural nets: a spatial neural net and a spectral neural net. The second level is a fusion neural net, a single network that combines the information from the sensor level. The inputs to the sensor nets are obtained from unsupervised spatial and spectral segmentation algorithms, which can be applied either to the original imagery or to imagery enhanced by the proposed super-resolution process. Spatial segmentation is obtained by a mean-shift method, and spectral segmentation by a Stochastic Expectation Maximization method. The decision outputs from the sensor nets are used to train the fusion net toward a single overall decision. The overall approach is tested in an experiment involving a multi-sensor airborne collection of LiDAR and hyperspectral data over a university campus in Gulfport, MS. The experiment clearly demonstrates the system's ability to exploit sensor synergy for enhanced classification. The final class map contains the geographical classes as well as the signature classes.
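The cascaded two-level design described in the abstract can be sketched in code. The paper does not specify network sizes, training procedures, or feature dimensions, so everything below is an illustrative assumption: the layer widths, the use of scikit-learn's MLPClassifier, and the synthetic per-pixel features standing in for the mean-shift (spatial) and Stochastic Expectation Maximization (spectral) segmentation outputs. The key structural point it reproduces is that the fusion net is trained on the sensor nets' decision outputs, not on the raw features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for segmentation-derived features (illustrative only):
# "spatial" features per pixel (e.g., from mean-shift segments) and
# "spectral" features per pixel (e.g., from SEM clusters), with 3 classes.
n, n_classes = 300, 3
y = rng.integers(0, n_classes, n)
X_spatial = rng.normal(y[:, None], 0.8, (n, 4))   # 4 spatial features
X_spectral = rng.normal(y[:, None], 0.8, (n, 6))  # 6 spectral features

# Level 1: two independent sensor neural nets, one per modality.
spatial_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                            random_state=0).fit(X_spatial, y)
spectral_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X_spectral, y)

# Level 2: the fusion net takes the sensor nets' decision outputs
# (class-probability vectors) as its input and learns the overall decision.
fusion_in = np.hstack([spatial_net.predict_proba(X_spatial),
                       spectral_net.predict_proba(X_spectral)])
fusion_net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                           random_state=0).fit(fusion_in, y)

print("fused training accuracy:", fusion_net.score(fusion_in, y))
```

In the paper's pipeline the same cascade would be fed by real segmentation outputs, and the super-resolution step would optionally enhance the imagery before segmentation; this sketch only illustrates the cascaded training scheme.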