{"title":"Human interaction analysis based on walking pattern transitions","authors":"H. Habe, Kazuhisa Honda, M. Kidode","doi":"10.1109/ICDSC.2009.5289357","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289357","url":null,"abstract":"We propose a method that analyzes interaction between pedestrians based on their trajectories obtained using sensors such as cameras. Our objective is to understand the mutual relationship between pedestrians and to detect anomalous events in a video sequence. In such situations, we can observe the interaction between a pair of pedestrians. This paper proposes a set of features that measures the interaction between pedestrians. We assume that a person is likely to change his/her walking patterns when he/she has been influenced by another person. Based on this assumption, the proposed method first extracts the transition points of a walking pattern from the trajectories of two pedestrians and then measures the strength of the influence using the temporal and spatial closeness between them. Finally, experimental results obtained from actual videos demonstrate the method's effectiveness in understanding mutual relationships and detecting anomalous events.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116672909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Surveillance of robots using multiple colour or depth cameras with distributed processing","authors":"M. Fischer, D. Henrich","doi":"10.1109/ICDSC.2009.5289381","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289381","url":null,"abstract":"We introduce a general approach for surveillance of robots using depth images from multiple distributed smart cameras, which can be either standard colour cameras or depth cameras. Unknown objects intruding into the robot workcell are detected in the camera images, and the minimum distance between the robot and all these detected unknown objects is calculated. The surveillance system is built as a master-slave architecture with one slave per distributed camera. The level of distributed processing on the slaves (from pure image acquisition up to minimum distance calculation) controls the remaining computations on the master and thus the quality of approximation of the detected unknown objects. The calculated minimum distance and the asymptotic overall surveillance cycle time are evaluated in experiments.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129365925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face detection system design for real time high resolution smart camera","authors":"Yasir Mohd-Mustafah, A. Bigdeli, A. Azman, B. Lovell","doi":"10.1109/ICDSC.2009.5289346","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289346","url":null,"abstract":"Recognizing faces in a crowd in real time is a key feature that would significantly enhance Intelligent Surveillance Systems. Using a smart camera as a tool to extract faces for recognition would greatly reduce the computational load on the main processing unit. The main processing unit would then not be overloaded by the demands of the high data rates of the video and could be designed solely for face recognition. The challenge is that, with the increasing speed and resolution of camera sensors, a fast and robust face detection system is required for real-time operation. In this paper we report on a multiple-stage face detection system designed for implementation on an FPGA-based high-resolution smart camera system. The system consists of filter stages that greatly reduce the region of interest in the video image, followed by a face detection stage that accurately locates the faces. The filter-stage algorithm is designed to be very fast so that it can be processed in real time. Meanwhile, the face detection stage is accelerated using a hardware and software co-design technique.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124748800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of composite events spanning multiple camera views with wireless embedded smart cameras","authors":"Youlu Wang, Senem Velipasalar, Mauricio Casares","doi":"10.1109/ICDSC.2009.5289355","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289355","url":null,"abstract":"With the introduction of battery-powered and embedded smart cameras, it has become viable to install many spatially-distributed cameras interconnected by wireless links. However, there are many problems that need to be solved to build scalable, battery-powered wireless smart-camera networks (Wi-SCaNs). These problems include the limited processing power, memory, energy and bandwidth. Limited resources necessitate light-weight algorithms to be implemented and run on the embedded cameras, and also careful choice of when and what data to transfer. We present a wireless embedded smart camera system, wherein each camera platform consists of a camera board and a wireless mote, and cameras communicate in a peer-to-peer manner over wireless links. Light-weight background subtraction and tracking algorithms are implemented and run on the camera boards. Cameras exchange data to track objects consistently, and also to update locations of lost objects. Since frequent transfer of large-sized data requires more power and incurs more communication delay, transferring all captured frames to a server should be avoided. Another challenge is the limited local memory for storage in camera motes. Thus, instead of transferring or saving every frame or every trajectory, there should be a mechanism to detect events of interest. In the presented system, events of interest can be defined beforehand, and simpler events can be combined in a sequence to define semantically higher-level and composite events. Moreover, event scenarios can span multiple camera views, which makes the definition of more complex events possible. Cameras communicate with each other about the portions of a scenario to detect an event that spans different camera views. We present examples of label transfer for consistent tracking, and of updating the location of occluded or lost objects from other cameras by wirelessly exchanging small-sized packets. We also show examples of detecting different composite and spatio-temporal event scenarios spanning multiple camera views. All the processing is performed on the camera boards.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114748629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic camera selection for activity monitoring in a multi-camera system for tennis","authors":"Philip Kelly, Ciarán Ó Conaire, Chanyul Kim, N. O’Connor","doi":"10.1109/ICDSC.2009.5289353","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289353","url":null,"abstract":"In professional tennis training matches, the coach needs to be able to view play from the most appropriate angle in order to monitor players' activities. In this paper, we describe and evaluate a system for automatic camera selection from a network of synchronised cameras within a tennis sporting arena. This work combines synchronised video streams from multiple cameras into a single summary video suitable for critical review by both tennis players and coaches. Using an overhead camera view, our system automatically determines the 2D tennis-court calibration resulting in a mapping that relates a player's position in the overhead camera to their position and size in another camera view in the network. This allows the system to determine the appearance of a player in each of the other cameras and thereby choose the best view for each player via a novel technique. The video summaries are evaluated in end-user studies and shown to provide an efficient means of multi-stream visualisation for tennis player activity monitoring.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127983200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PhD forum: Non supervised learning of human activities in Visual Sensor Networks","authors":"Rodrigo Cilla, M. A. Patricio, A. Berlanga, J. M. Molina","doi":"10.1109/ICDSC.2009.5289391","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289391","url":null,"abstract":"We outline how Human Activity Recognition systems based on Dynamic Bayesian Networks using a single camera may be adapted for use in Visual Sensor Networks. It is assumed that the current activity generates independent observations on some cameras in the network. The activity is then inferred by accumulating the evidence provided by the gathered observations. At the same time, some activities never produce observations on some cameras. The Baum-Welch algorithm is modified to deal with this situation, and some examples of when it converges are provided.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130444780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PEM-ID: Identifying people by gait-matching using cameras and wearable accelerometers","authors":"Thiago Teixeira, Deokwoo Jung, G. Dublon, A. Savvides","doi":"10.1109/ICDSC.2009.5289412","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289412","url":null,"abstract":"The ability to localize and identify multiple people is paramount to the inference of high-level activities for informed decision-making. In this paper, we describe the PEM-ID system, which uniquely identifies people tagged with accelerometer nodes in the video output of preinstalled infrastructure cameras. For this, we introduce a new distance measure between signals comprised of timestamps of gait landmarks, and utilize it to identify each tracked person from the video by pairing them with a wearable accelerometer node.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128955375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint estimation of offset parameters and high-resolution images via l1-norm minimization principle","authors":"A. Hirabayashi","doi":"10.1109/ICDSC.2009.5289341","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289341","url":null,"abstract":"We propose a joint estimation algorithm of offset parameters and a high-resolution image from a set of multiple low-resolution images based on the l1-norm minimization principle. An advantage of the joint approach is that, since it uses the low-resolution images in a batch manner, it suffers less from aliasing effects. The l1-norm minimization principle is effective because we assume sparsity of the underlying high-resolution images. The proposed algorithm first minimizes the l1-norm of a vector that satisfies the data constraint with the offset parameters fixed. Then, the minimum value is further minimized with respect to the parameters. Even though this is a heuristic approach, computer simulations show that the proposed algorithm perfectly reconstructs sparse images with probability greater than or equal to 99% for large-dimensional images. The proposed approach is attractive because of its computational efficiency.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130963993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PhD forum: Probabilistic surveillance with multiple active cameras","authors":"Eric Sommerlade, I. Reid","doi":"10.1109/ICDSC.2009.5289403","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289403","url":null,"abstract":"This work presents a method to control multiple, but diverse, pan-tilt-zoom cameras that share overlapping views of the same spatial location for the purpose of observing this scene. We cast this control input selection problem in an information-theoretic framework, where we maximise the expected mutual information gain in the scene model with respect to the observation parameters. Overall this yields a framework in which heterogeneous active camera types can be integrated cleanly and consistently, obviating the need for a wide-angle supervisor camera or other artificial restrictions on the camera parameter settings.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131295733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online video synthesis for removing occluding objects using multiple uncalibrated cameras via plane sweep algorithm","authors":"Takahide Hosokawa, Songkran Jarusirisawad, H. Saito","doi":"10.1109/ICDSC.2009.5289380","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289380","url":null,"abstract":"We present an online rendering system which removes occluding objects in front of the objective scene from an input video using multiple videos taken with multiple cameras. To obtain the geometrical relations between all cameras, we use projective grid space (PGS) defined by the epipolar geometry between two basis cameras. We then apply the plane-sweep algorithm to generate a depth image in the input camera. By excluding the area of occluding objects from the volume of the sweeping planes, we can generate the depth map without the occluding objects. Using this depth map, we can render the image without obstacles from all the multiple camera videos. Since we use a graphics processing unit (GPU) for computation, we can achieve real-time online rendering using a normal-spec PC and multiple USB cameras.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124582227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}