{"title":"Detection of regions of interest and camouflage breaking by direct convexity estimation","authors":"A. Tankus, Y. Yeshurun","doi":"10.1109/WVS.1998.646019","DOIUrl":"https://doi.org/10.1109/WVS.1998.646019","url":null,"abstract":"Detection of regions of interest is usually based on edge maps. We suggest a novel nonedge-based mechanism for detection of regions of interest, which extracts 3D information from the image. Our operator detects smooth 3D convex and concave objects based on direct processing of intensity values. Invariance to a large family of functions is mathematically proved. It follows that our operator is robust to variation in illumination, or orientation, and scale, in contrast with most other attentional operators. The operator is also demonstrated to efficiently detect 3D objects camouflaged in noisy areas. An extensive comparison, with edge-based attentional operators is delineated.","PeriodicalId":359599,"journal":{"name":"Proceedings 1998 IEEE Workshop on Visual Surveillance","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124380019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust, real-time people tracking in open environments using integrated stereo, color, and face detection","authors":"Trevor Darrell, G. Gordon, J. Woodfill, H. Baker, M. Harville","doi":"10.1109/WVS.1998.646017","DOIUrl":"https://doi.org/10.1109/WVS.1998.646017","url":null,"abstract":"We present approach to robust real-time person tracking in crowded and/or unknown environments using multimodal integration. We combine stereo, color, and face detection modules into a single robust system, and show an initial application for an interactive display where the user sees his face distorted into various comic poses in real-time. Stereo processing is used to isolate the figure of a user from other objects and people in the background. Skin-hue classification identifies and tracks likely body parts within the foreground region, and face pattern detection discriminates and localizes the face within the tracked body parts. We discuss the failure modes of these individual components, and report results with the complete system in trials with thousands of users.","PeriodicalId":359599,"journal":{"name":"Proceedings 1998 IEEE Workshop on Visual Surveillance","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127635967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attentional control for visual surveillance","authors":"R. Howarth, H. Buxton","doi":"10.1109/WVS.1998.646025","DOIUrl":"https://doi.org/10.1109/WVS.1998.646025","url":null,"abstract":"The paper introduces a general framework for attentional control related to current work and looks at the HIVIS-WATCHER implementation. This system uses a separation of simple and complex operators that are formally defined within a deictic frame of reference and coordinated by the \"official-observer\". The purpose of the separation is to avoid complex attentive processing except for objects selected as likely to be interacting in a task-related way by preattentive cues, e.g. mutual-proximity. The deictic representation allows typical behaviour to be modelled from a selected agent's point of view. The behavioral interaction used for illustration here is \"overtaking\" which involves aspects of the heading, speed and relative position. A tasknet implemented as a static Bayesian belief network integrates this evidence to infer the likely episode as it evolves in the dynamic scene. Tables for the combination of deictic viewpoints and results from looking for likely overtaking behaviour are presented. Conclusions and future work centre on the need for full integration of early visual processing and object recognition with the high-level behavioural interpretation system.","PeriodicalId":359599,"journal":{"name":"Proceedings 1998 IEEE Workshop on Visual Surveillance","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128432516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time active visual surveillance by integrating peripheral motion detection with foveated tracking","authors":"J. Batista, P. Peixoto, Helder Sabino de Araújo","doi":"10.1109/WVS.1998.646016","DOIUrl":"https://doi.org/10.1109/WVS.1998.646016","url":null,"abstract":"In this paper we describe an active binocular tracking system integrating peripheral motion detection. The system is made up of a binocular active system used to track the objects and a fixed camera providing wide angle images of the environment. The system can cope with changes in lighting conditions by adjusting aperture and focus. Binocular flow enables tracking of nonrigid objects even when partial occlusion occurs.","PeriodicalId":359599,"journal":{"name":"Proceedings 1998 IEEE Workshop on Visual Surveillance","volume":"181 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126172808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"T/sup 3/wT: tracking turning trucks with trailers","authors":"H. Nagel, T. Schwarz, H. Leuck, M. Haag","doi":"10.1109/WVS.1998.646022","DOIUrl":"https://doi.org/10.1109/WVS.1998.646022","url":null,"abstract":"A system for model-based tracking of road vehicles in digitized video sequences of traffic scenes has been generalized to handle \"truck-and-trailer\"-configurations. Whereas, previously, each vehicle had been modeled as a single rigid polyhedron, the generalisation handles two or more rigid components modeled as polyhedra with (one degree of freedom) rotational joints between consecutive components. Such an approach could be kept simple by incorporating the steering angle for the front wheels of each component into the overall state vector to be estimated. The feasibility of this approach is demonstrated by the evaluation of three image sequences with different truck-and-trailer configurations recorded at various inner-city road intersections.","PeriodicalId":359599,"journal":{"name":"Proceedings 1998 IEEE Workshop on Visual Surveillance","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130480502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved illumination assessment for vision-based traffic monitoring","authors":"Lambert Wixson, Keith Hanna, Deepam Mishra","doi":"10.1109/WVS.1998.646018","DOIUrl":"https://doi.org/10.1109/WVS.1998.646018","url":null,"abstract":"Vision systems that must operate autonomously over varying environment conditions often must use different parameter values or algorithms depending on these conditions. A key problem is how to automatically assess the incoming imagery to determine these appropriate parameters and algorithms. This paper presents methods for such assessment. Specifically it presents measures for determining whether the scene is well-lit (i.e. whether objects' entire extent is visible, versus just their lights), whether the scene has sufficient contrast, and whether objects are casting shadows. The methods are applied in the domain of traffic monitoring, are based on empirical data, and have been deployed in hundreds of installed traffic monitoring systems. Here, we present data obtained videotape segments covering 34 representative installations. The paper includes performance data and an improved algorithm for determining whether the scene is well-lit.","PeriodicalId":359599,"journal":{"name":"Proceedings 1998 IEEE Workshop on Visual Surveillance","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133873955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Front propagation and level-set approach for geodesic active stereovision","authors":"R. Deriche, C. Bouvin, O. Faugeras","doi":"10.1109/WVS.1998.646021","DOIUrl":"https://doi.org/10.1109/WVS.1998.646021","url":null,"abstract":"Given a weakly calibrated stereo system and a virtual 3D surveillance plane specified by any 3 points given by an external operator we describe a framework for matching complex 2D planar curves lying at the intersection of the 3D surveillance plane and the 3D scene being observed. This important information may then be used to know which parts of the objects being observed are between the stereo system and the virtual 3D surveillance plane, and which parts are behind the 3D virtual surveillance plane i.e. outside a security zone specified around the stereo system. Using an energy minimization based approach, we reformulate this stereo problem as a front propagation problem. The Euler Lagrange equation of the designed energy functional is derived and the flow minimizing the energy is obtained. This original scheme may be viewed as a geodesic active stereo model which basically attract the given curves to the bottom of a potential well corresponding to pixels having similar intensities. Using the level set formulation scheme of Osher and Sethian (1988), complex curves can be matched and topological changes for the evolving curves are naturally managed. The final result is also relatively independent of the curve initialization. Promising experimental results have been obtained on various real images.","PeriodicalId":359599,"journal":{"name":"Proceedings 1998 IEEE Workshop on Visual Surveillance","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121990961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Indoor monitoring via the collaboration between a peripheral sensor and a foveal sensor","authors":"Yuntao Cui, S. Samarasekera, Qian Huang, Michael Grei","doi":"10.1109/WVS.1998.646014","DOIUrl":"https://doi.org/10.1109/WVS.1998.646014","url":null,"abstract":"In this paper, we describe a novel setup which uses a peripheral sensor and a foveal sensor to perform the indoor monitoring task. The system has the following features: 1) it is constantly aware of the global surrounding, 2) it can dynamically determine its attention to focus on some important events, 3) it achieves real time and reliable performance. In the current implementation, we focus on the events which are triggered by moving persons. Two types of visual agents are used to track moving objects: a peripheral sensing agent that performs global monitoring tasks and a foveal agent that performs focused monitoring tasks. These two heterogeneous sensing agents are coupled in a unique way to work not only asynchronously but also collaboratively via a facilitator. Compared with existing approaches, the proposed scheme provides many advantages in handling the unrestricted movement and various types of occlusions.","PeriodicalId":359599,"journal":{"name":"Proceedings 1998 IEEE Workshop on Visual Surveillance","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129153101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}