{"title":"Distributed sensor network data fusion using image processing","authors":"M. Elmusrati, R. Jäntti, H. Koivo","doi":"10.1109/ICW.2005.43","DOIUrl":null,"url":null,"abstract":"In this paper we discuss the analogy between spatial distributed sensor network analysis and image processing. The analogy comes from the fact that in high density sensor networks the output of sensors is correlated both spatially and temporally. This means that the output of a sensor is correlated with the outputs of its neighbours. This characteristic is very similar to the pixels' output (intensity) in video signals. The video signal consists of multiple correlated frames (correlation in time), and each frame consists of large number of pixels, and usually there is high correlation between pixels (spatial correlation). By defining this relation one can use the well-known image processing techniques for sensor data compression, fusion, and analysis. As an example we show how to use the quadtree image decomposition for sensor spatial decomposition.","PeriodicalId":255955,"journal":{"name":"2005 Systems Communications (ICW'05, ICHSN'05, ICMCS'05, SENET'05)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2005 Systems Communications (ICW'05, ICHSN'05, ICMCS'05, SENET'05)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICW.2005.43","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In this paper we discuss the analogy between spatially distributed sensor network analysis and image processing. The analogy stems from the fact that in high-density sensor networks the sensor outputs are correlated both spatially and temporally: the output of a sensor is correlated with the outputs of its neighbours. This characteristic is very similar to pixel intensities in video signals. A video signal consists of multiple correlated frames (correlation in time), and each frame consists of a large number of pixels that are usually highly correlated (spatial correlation). By establishing this relation, one can apply well-known image processing techniques to sensor data compression, fusion, and analysis. As an example, we show how to use quadtree image decomposition for spatial decomposition of sensor data.
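
To illustrate the quadtree idea mentioned in the abstract, the following is a minimal sketch (not the authors' implementation) of quadtree decomposition applied to a 2D grid of sensor readings. It assumes the readings lie on a square, power-of-two grid; the homogeneity threshold, the field values, and all names are illustrative only.

```python
import numpy as np

def quadtree_decompose(block, threshold, origin=(0, 0), leaves=None):
    """Recursively split a square 2D array of sensor readings into quadrants.

    A block is kept as a single leaf when its readings are nearly homogeneous
    (max - min <= threshold); otherwise it is split into four quadrants and
    the procedure recurses. Each leaf is reported as (row, col, size, mean).
    """
    if leaves is None:
        leaves = []
    r, c = origin
    n = block.shape[0]
    # Stop splitting if the block is homogeneous or a single cell.
    if n == 1 or block.max() - block.min() <= threshold:
        leaves.append((r, c, n, float(block.mean())))
        return leaves
    h = n // 2
    quadtree_decompose(block[:h, :h], threshold, (r, c), leaves)
    quadtree_decompose(block[:h, h:], threshold, (r, c + h), leaves)
    quadtree_decompose(block[h:, :h], threshold, (r + h, c), leaves)
    quadtree_decompose(block[h:, h:], threshold, (r + h, c + h), leaves)
    return leaves

if __name__ == "__main__":
    # Hypothetical 8x8 grid of sensor readings (e.g. temperature):
    # a smooth field with one hotter region in the lower-right corner.
    rng = np.random.default_rng(0)
    field = 20.0 + 0.1 * rng.standard_normal((8, 8))
    field[4:, 4:] += 5.0
    leaves = quadtree_decompose(field, threshold=0.5)
    # Spatially correlated regions collapse to a few leaves, so only one
    # mean value per leaf needs to be reported instead of every reading.
    for r, c, size, mean in leaves:
        print(f"leaf at ({r},{c}) size {size}x{size}: mean={mean:.2f}")
```

In this sketch, spatially correlated neighbourhoods are summarized by a single leaf value, which is the sense in which the quadtree provides compression and spatial decomposition of the sensor field.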