{"title":"无线嵌入式智能摄像头跨多个摄像头视图的复合事件检测","authors":"Youlu Wang, Senem Velipasalar, Mauricio Casares","doi":"10.1109/ICDSC.2009.5289355","DOIUrl":null,"url":null,"abstract":"With the introduction of battery-powered and embedded smart cameras, it has become viable to install many spatially-distributed cameras interconnected by wireless links. However, there are many problems that need to be solved to build scalable, battery-powered wireless smart-camera networks (Wi-SCaNs). These problems include the limited processing power, memory, energy and bandwidth. Limited resources necessitate light-weight algorithms to be implemented and run on the embedded cameras, and also careful choice of when and what data to transfer. We present a wireless embedded smart camera system, wherein each camera platform consists of a camera board and a wireless mote, and cameras communicate in a peer-to-peer manner over wireless links. Light-weight background subtraction and tracking algorithms are implemented and run on camera boards. Cameras exchange data to track objects consistently, and also to update locations of lost objects. Since frequent transfer of large-sized data requires more power and incurs more communication delay, transferring all captured frames to a server should be avoided. Another challenge is the limited local memory for storage in camera motes. Thus, instead of transferring or saving every frame or every trajectory, there should be a mechanism to detect events of interest. In the presented system, events of interest can be defined beforehand, and simpler events can be combined in a sequence to define semantically higher-level and composite events. Moreover, event scenarios can span multiple camera views, which make the definition of more complex events possible. Cameras communicate with each other about the portions of a scenario to detect an event that spans different camera views. We present examples of label transfer for consistent tracking, and of updating the location of occluded or lost objects from other cameras by wirelessly exchanging small-sized packets. We also show examples of detecting different composite and spatio-temporal event scenarios spanning multiple camera views. All the processing is performed on the camera boards.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Detection of composite events spanning multiple camera views with wireless embedded smart cameras\",\"authors\":\"Youlu Wang, Senem Velipasalar, Mauricio Casares\",\"doi\":\"10.1109/ICDSC.2009.5289355\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the introduction of battery-powered and embedded smart cameras, it has become viable to install many spatially-distributed cameras interconnected by wireless links. However, there are many problems that need to be solved to build scalable, battery-powered wireless smart-camera networks (Wi-SCaNs). These problems include the limited processing power, memory, energy and bandwidth. Limited resources necessitate light-weight algorithms to be implemented and run on the embedded cameras, and also careful choice of when and what data to transfer. 
We present a wireless embedded smart camera system, wherein each camera platform consists of a camera board and a wireless mote, and cameras communicate in a peer-to-peer manner over wireless links. Light-weight background subtraction and tracking algorithms are implemented and run on camera boards. Cameras exchange data to track objects consistently, and also to update locations of lost objects. Since frequent transfer of large-sized data requires more power and incurs more communication delay, transferring all captured frames to a server should be avoided. Another challenge is the limited local memory for storage in camera motes. Thus, instead of transferring or saving every frame or every trajectory, there should be a mechanism to detect events of interest. In the presented system, events of interest can be defined beforehand, and simpler events can be combined in a sequence to define semantically higher-level and composite events. Moreover, event scenarios can span multiple camera views, which make the definition of more complex events possible. Cameras communicate with each other about the portions of a scenario to detect an event that spans different camera views. We present examples of label transfer for consistent tracking, and of updating the location of occluded or lost objects from other cameras by wirelessly exchanging small-sized packets. We also show examples of detecting different composite and spatio-temporal event scenarios spanning multiple camera views. All the processing is performed on the camera boards.\",\"PeriodicalId\":324810,\"journal\":{\"name\":\"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2009-10-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDSC.2009.5289355\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDSC.2009.5289355","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Detection of composite events spanning multiple camera views with wireless embedded smart cameras
With the introduction of battery-powered embedded smart cameras, it has become viable to install many spatially distributed cameras interconnected by wireless links. However, many problems must be solved to build scalable, battery-powered wireless smart-camera networks (Wi-SCaNs), including limited processing power, memory, energy, and bandwidth. These limited resources necessitate implementing and running lightweight algorithms on the embedded cameras, as well as choosing carefully when and what data to transfer. We present a wireless embedded smart-camera system in which each camera platform consists of a camera board and a wireless mote, and cameras communicate in a peer-to-peer manner over wireless links. Lightweight background-subtraction and tracking algorithms are implemented and run on the camera boards. Cameras exchange data to track objects consistently and to update the locations of lost objects. Since frequent transfer of large data requires more power and incurs more communication delay, transferring all captured frames to a server should be avoided. Another challenge is the limited local memory available for storage on camera motes. Thus, instead of transferring or saving every frame or every trajectory, there should be a mechanism to detect events of interest. In the presented system, events of interest can be defined beforehand, and simpler events can be combined in sequence to define semantically higher-level, composite events. Moreover, event scenarios can span multiple camera views, which makes the definition of more complex events possible. Cameras communicate with each other about their portions of a scenario to detect an event that spans different camera views. We present examples of label transfer for consistent tracking, and of updating the locations of occluded or lost objects from other cameras by wirelessly exchanging small packets. We also show examples of detecting different composite and spatio-temporal event scenarios spanning multiple camera views. All processing is performed on the camera boards.
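The abstract mentions lightweight background subtraction running on the camera boards but does not specify the algorithm. A common lightweight choice in this setting is a running-average background model with per-pixel thresholding; the sketch below (in NumPy, with parameter values chosen for illustration only) shows that class of technique, not the paper's exact method.

```python
# A minimal running-average background-subtraction sketch. The paper runs a
# lightweight method on the camera boards; its exact algorithm may differ --
# this only illustrates the general technique. alpha and threshold values
# are illustrative assumptions.
import numpy as np

class RunningAverageBG:
    def __init__(self, alpha=0.05, threshold=30):
        self.alpha = alpha          # background adaptation rate
        self.threshold = threshold  # per-pixel foreground threshold
        self.background = None      # floating-point background model

    def apply(self, frame):
        """Return a boolean foreground mask for a grayscale frame."""
        frame = frame.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold
        # Update the model only at background pixels, so a stationary
        # foreground object is not immediately absorbed into the model.
        self.background[~mask] = ((1 - self.alpha) * self.background[~mask]
                                  + self.alpha * frame[~mask])
        return mask
```

A model like this needs only one float per pixel of state and a few arithmetic operations per frame, which is why this family of methods suits memory- and power-constrained camera motes.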
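The cameras exchange "small-sized packets" for label transfer and for updating the locations of occluded or lost objects. The paper does not give the packet layout, so the following wire-format sketch is purely hypothetical: field names, sizes, and message types are assumptions meant only to show why such updates fit in a few bytes per object.

```python
# A hypothetical compact wire format for inter-camera updates. The paper's
# actual packet layout is not specified; all fields here are illustrative.
import struct

# little-endian: message type (1 B), sender camera id (1 B),
# object label (2 B), x and y image coordinates (2 B each) = 8 bytes.
PACKET_FMT = "<BBHHH"
MSG_LABEL_TRANSFER, MSG_LOST_OBJECT_UPDATE = 0, 1

def pack_update(msg_type, camera_id, label, x, y):
    return struct.pack(PACKET_FMT, msg_type, camera_id, label, x, y)

def unpack_update(payload):
    return struct.unpack(PACKET_FMT, payload)

pkt = pack_update(MSG_LABEL_TRANSFER, camera_id=0, label=7, x=120, y=96)
assert len(pkt) == 8  # small enough for a single mote radio frame
assert unpack_update(pkt) == (MSG_LABEL_TRANSFER, 0, 7, 120, 96)
```

Exchanging a few such bytes per tracked object, instead of whole frames, is what keeps the power draw and communication delay low.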
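Central to the paper is composing simpler events, each observed in a particular camera view, into a sequence that defines a higher-level composite event. One natural realization is a small state machine that advances as each expected sub-event is reported; the sketch below uses hypothetical names (PrimitiveEvent, CompositeEvent, observe) and is not the paper's actual event-definition API.

```python
# A minimal sketch of sequence-based composite-event detection across camera
# views. Class and method names are hypothetical, not the paper's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class PrimitiveEvent:
    name: str        # e.g. "enter_region", "loiter"
    camera_id: int   # view in which this sub-event must be observed

class CompositeEvent:
    """Fires when its sub-events are observed in the defined order.
    Non-matching events are ignored rather than resetting the sequence."""
    def __init__(self, name, sequence):
        self.name = name
        self.sequence = sequence  # ordered list of PrimitiveEvent
        self.next_idx = 0         # index of the sub-event awaited next

    def observe(self, event_name, camera_id):
        """Feed a detected primitive event; return True when complete."""
        expected = self.sequence[self.next_idx]
        if event_name == expected.name and camera_id == expected.camera_id:
            self.next_idx += 1
            if self.next_idx == len(self.sequence):
                self.next_idx = 0  # reset so the scenario can re-fire
                return True
        return False

# Example scenario: an object enters a region seen by camera 0, then appears
# in a region watched by camera 1 -- a two-view composite event.
scenario = CompositeEvent("cross_views", [
    PrimitiveEvent("enter_region", camera_id=0),
    PrimitiveEvent("enter_region", camera_id=1),
])
assert not scenario.observe("enter_region", 0)  # first sub-event only
assert scenario.observe("enter_region", 1)      # composite event detected
```

In a distributed deployment, each camera would track only the sub-events assigned to its own view and notify its peers when its portion of the scenario completes, which matches the abstract's description of cameras communicating about portions of a scenario.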