Real-Time Imaging: Latest Publications

Event detection for intelligent car park video surveillance
Real-Time Imaging Pub Date: 2005-06-01 DOI: 10.1016/j.rti.2005.02.002
Georgios Diamantopoulos, Michael Spann
Abstract: Intelligent surveillance has become an important research issue due to the high cost and low efficiency of human supervisors, and machine intelligence is required to provide a solution for automated event detection. In this paper we describe a real-time system that has been used for detecting tailgating, an example of complex interactions and activities within a vehicle parking scenario, using an adaptive background learning algorithm and intelligence to overcome the problems of object masking, separation and occlusion. We also show how a generalized framework may be developed for the detection of other complex events.
Volume 11, Issue 3, Pages 233-243
Citations: 15
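The abstract centres on an adaptive background learning stage feeding event-level reasoning. As a rough illustration of that first stage only (not the authors' algorithm), the sketch below maintains a running-average background model and thresholds the per-pixel deviation to obtain a foreground mask; the learning rate and threshold values are illustrative assumptions.

```python
import numpy as np

def update_background(background, frame, alpha=0.02):
    """Exponential running average of the scene; alpha controls adaptation speed."""
    return (1.0 - alpha) * background + alpha * frame.astype(np.float32)

def foreground_mask(background, frame, tau=25.0):
    """Mark pixels that deviate from the learned background by more than tau."""
    diff = np.abs(frame.astype(np.float32) - background)
    return diff.max(axis=-1) > tau if diff.ndim == 3 else diff > tau

# Usage: initialise background = first_frame.astype(np.float32); for each new frame
# compute the mask, update the background, and pass the resulting blobs to the
# event-level reasoning (e.g. tailgating checks around the barrier region).
```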
Learning the Semantic Landscape: embedding scene knowledge in object tracking
Real-Time Imaging Pub Date: 2005-06-01 DOI: 10.1016/j.rti.2004.12.002
D. Greenhill, J. Renno, J. Orwell, G.A. Jones
Abstract: The accuracy of object tracking methodologies can be significantly improved by utilizing knowledge about the monitored scene. Such scene knowledge includes the homography between the camera and ground planes and the occlusion landscape identifying the depth map associated with the static occlusions in the scene. Using the ground plane, a simple method of relating the projected height and width of people objects to image location is used to constrain the dimensions of appearance models. Moreover, trajectory modeling can be greatly improved by performing tracking on the ground plane using global real-world noise models for the observation and dynamic processes. Finally, the occlusion landscape allows the tracker to predict the complete or partial occlusion of object observations. To facilitate plug-and-play functionality, this scene knowledge must be automatically learnt. The paper demonstrates how, over a sufficient length of time, observations from the monitored scene itself can be used to parameterize the semantic landscape.
Volume 11, Issue 3, Pages 186-203
Citations: 23
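One concrete piece of the scene knowledge described above is the camera-to-ground-plane homography, which lets the tracker work in ground-plane coordinates where real-world noise models are meaningful. A minimal sketch of that mapping step, assuming the 3x3 homography H is already known (the paper learns such knowledge automatically from observations); the matrix and point values here are purely illustrative.

```python
import numpy as np

def image_to_ground(H, points_xy):
    """Apply homography H (image -> ground plane) to an (N, 2) array of pixel points."""
    pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])   # homogeneous coordinates
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                        # perspective divide

# Hypothetical homography and two image foot points of tracked people:
H = np.array([[0.05, 0.00, -10.0],
              [0.00, 0.08, -20.0],
              [0.00, 0.001,  1.0]])
feet = np.array([[320.0, 460.0], [500.0, 420.0]])
print(image_to_ground(H, feet))   # ground-plane positions; map back with np.linalg.inv(H)
```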
Detection of cyclic human activities based on the morphological analysis of the inter-frame similarity matrix
Real-Time Imaging Pub Date: 2005-06-01 DOI: 10.1016/j.rti.2005.03.004
Alexandra Branzan Albu, Mehran Yazdi, Robert Bergevin
Abstract: This paper describes a new method for the temporal segmentation of periodic human activities from continuous real-world indoor video sequences acquired with a static camera. The proposed approach is based on the concept of the inter-frame similarity matrix. This matrix contains relevant information for the analysis of cyclic and symmetric human activities, where the motion performed during the first semi-cycle is repeated in the opposite direction during the second semi-cycle. Thus, the pattern associated with a periodic activity in the similarity matrix is rectangular and decomposable into elementary units. We propose a morphology-based approach for the detection and analysis of activity patterns. Pattern extraction is further used for the detection of the temporal boundaries of the cyclic symmetric activities. The approach for experimental evaluation is based on a statistical estimation of the ground-truth segmentation and on a confidence ratio for temporal segmentations.
Volume 11, Issue 3, Pages 219-232
Citations: 4
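A minimal sketch of building the inter-frame similarity matrix on which the morphological analysis operates, using normalized correlation between frames as an assumed similarity measure; the paper's exact measure and the morphological decomposition of the rectangular patterns are not reproduced.

```python
import numpy as np

def similarity_matrix(frames):
    """frames: (T, H, W) grayscale stack -> (T, T) matrix of pairwise frame similarities."""
    T = frames.shape[0]
    flat = frames.reshape(T, -1).astype(np.float32)
    flat -= flat.mean(axis=1, keepdims=True)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8
    return flat @ flat.T        # normalised correlation between every pair of frames

# Cyclic, symmetric activities appear as periodic block structure in this matrix;
# temporal segmentation amounts to locating the boundaries of those blocks.
```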
Optical flow-based real-time object tracking using non-prior training active feature model
Real-Time Imaging Pub Date: 2005-06-01 DOI: 10.1016/j.rti.2005.03.006
Jeongho Shin, Sangjin Kim, Sangkyu Kang, Seong-Won Lee, Joonki Paik, Besma Abidi, Mongi Abidi
Abstract: This paper presents a feature-based object tracking algorithm using optical flow under the non-prior training (NPT) active feature model (AFM) framework. The proposed tracking procedure can be divided into three steps: (i) localization of an object of interest, (ii) prediction and correction of the object's position by utilizing spatio-temporal information, and (iii) restoration of occlusion using NPT-AFM. The proposed algorithm can track both rigid and deformable objects, and is robust against sudden object motion because both a feature point and the corresponding motion direction are tracked at the same time. Tracking performance is not degraded even with a complicated background because feature points inside an object are completely separated from the background. Finally, the AFM enables stable tracking of occluded objects with up to 60% occlusion. NPT-AFM, one of the major contributions of this paper, removes the off-line preprocessing step of generating an a priori training set. The training set used for model fitting is updated at each frame to keep object features robust under occlusion. The proposed AFM can track deformable, partially occluded objects using a greatly reduced number of feature points rather than the entire shapes used in existing shape-based methods. On-line updating of the training set and the reduced number of feature points make a real-time, robust tracking system realizable. Experiments have been performed using several in-house video clips from a static camera, including objects such as a robot moving on a floor and people walking both indoors and outdoors. To show the performance of the proposed tracking algorithm, some experiments have been performed in noisy and low-contrast environments. For more objective comparison, the PETS 2001 and PETS 2002 datasets were also used.
Volume 11, Issue 3, Pages 204-218
Citations: 98
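For the optical-flow step only, a minimal sketch using OpenCV's pyramidal Lucas-Kanade tracker to follow sparse feature points from frame to frame; the NPT-AFM model fitting, on-line training-set update, and occlusion restoration described above are not reproduced, and the input clip and parameter values are assumptions.

```python
import cv2

cap = cv2.VideoCapture("surveillance.avi")          # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Select an initial set of trackable corner features.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade: estimate where each feature moved in the new frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1                       # keep only reliably tracked features
    pts, prev_gray = nxt[good].reshape(-1, 1, 2), gray
```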
Real-time foreground–background segmentation using codebook model
Real-Time Imaging Pub Date: 2005-06-01 DOI: 10.1016/j.rti.2004.12.004
Kyungnam Kim, Thanarat H. Chalidabhongse, David Harwood, Larry Davis
Abstract: We present a real-time algorithm for foreground–background segmentation. Sample background values at each pixel are quantized into codebooks which represent a compressed form of the background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. The codebook representation is efficient in memory and speed compared with other background modeling techniques. Our method can handle scenes containing moving backgrounds or illumination variations, and it achieves robust detection for different types of videos. We compared our method with other multimode modeling techniques. In addition to the basic algorithm, two features that improve the algorithm are presented: layered modeling/detection and adaptive codebook updating. For performance evaluation, we have applied perturbation detection rate analysis to four background subtraction algorithms and two videos of different types of scenes.
Volume 11, Issue 3, Pages 172-185
Citations: 1601
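A heavily simplified sketch of per-pixel codebook background modelling in the spirit of the abstract: each pixel keeps a small set of codewords (mean colour, brightness range, frequency); training either updates a matching codeword or adds a new one, and detection marks a pixel as foreground when no codeword matches. The colour-distortion metric, MNRL-based pruning, and the layered/adaptive extensions of the paper are omitted, and the thresholds are illustrative.

```python
import numpy as np

EPS = 20.0  # assumed colour-distance threshold

def match(codeword, pixel):
    mean, lo, hi, _ = codeword
    bright = pixel.sum() / 3.0
    return np.linalg.norm(pixel - mean) < EPS and 0.7 * lo <= bright <= 1.3 * hi

def train(frames):
    """frames: (T, H, W, 3) uint8. Returns an H x W grid of per-pixel codeword lists."""
    T, H, W, _ = frames.shape
    books = [[[] for _ in range(W)] for _ in range(H)]
    for t in range(T):
        for y in range(H):
            for x in range(W):
                px = frames[t, y, x].astype(np.float32)
                cb = books[y][x]
                for i, (mean, lo, hi, f) in enumerate(cb):
                    if match((mean, lo, hi, f), px):
                        cb[i] = ((f * mean + px) / (f + 1),
                                 min(lo, px.sum() / 3.0), max(hi, px.sum() / 3.0), f + 1)
                        break
                else:
                    cb.append((px, px.sum() / 3.0, px.sum() / 3.0, 1))
    return books

def segment(books, frame):
    """Foreground mask: a pixel is foreground if no codeword matches it."""
    H, W, _ = frame.shape
    mask = np.ones((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            px = frame[y, x].astype(np.float32)
            if any(match(cw, px) for cw in books[y][x]):
                mask[y, x] = False
    return mask
```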
Rule-based real-time detection of context-independent events in video shots
Real-Time Imaging Pub Date: 2005-06-01 DOI: 10.1016/j.rti.2004.12.001
Aishy Amer, Eric Dubois, Amar Mitiche
Abstract: The purpose of this paper is to investigate a real-time system to detect context-independent events in video shots. We test the system in video surveillance environments with a fixed camera. We assume that objects have been segmented (not necessarily perfectly) and reason with their low-level features, such as shape, and mid-level features, such as trajectory, to infer events related to moving objects. Our goal is to detect generic events, i.e., events that are independent of the context of where or how they occur. Events are detected based on formal definitions and on approximate but efficient world models, by continually monitoring changes and behavior of the features of video objects; when certain conditions are met, events are detected. We classify events into four types: primitive, action, interaction, and composite. Our system includes three interacting video processing layers: enhancement to estimate and reduce additive noise, analysis to segment and track video objects, and interpretation to detect context-independent events. The contributions of this paper are the interpretation of spatio-temporal object features to detect context-independent events in real time, the adaptation to noise, and the correction and compensation of low-level processing errors at higher layers where more information is available. The effectiveness and real-time response of our system are demonstrated by extensive experimentation on indoor and outdoor video shots in the presence of multi-object occlusion, different noise levels, and coding artifacts.
Volume 11, Issue 3, Pages 244-256
Citations: 13
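As an illustration of the rule-based idea of continually monitoring object features and firing an event when conditions are met, the sketch below detects a hypothetical primitive "stopped" event from a tracked trajectory; the rule, threshold, and feature names are assumptions, not the paper's formal event definitions.

```python
import numpy as np

def detect_stop(trajectory, window=10, speed_thresh=1.0):
    """trajectory: (T, 2) array of object centroids in pixels.
    Fires a 'stopped' event when mean speed over the last `window` frames is low."""
    if len(trajectory) < window + 1:
        return False
    recent = np.diff(trajectory[-(window + 1):], axis=0)   # frame-to-frame displacement
    return float(np.linalg.norm(recent, axis=1).mean()) < speed_thresh

# Higher-level (action/interaction/composite) events would be composed from such
# primitives, e.g. "deposit object" = new object splits from a person AND then stops.
```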
Introduction to the special issue on video object processing for surveillance applications
Real-Time Imaging Pub Date: 2005-06-01 DOI: 10.1016/j.rti.2005.06.001
Aishy Amer, Carlo Regazzoni
Volume 11, Issue 3, Pages 167-171
Citations: 46
SmartSpectra: Applying multispectral imaging to industrial environments
Real-Time Imaging Pub Date: 2005-04-01 DOI: 10.1016/j.rti.2005.04.007
Joan Vila, Javier Calpe, Filiberto Pla, Luis Gómez, Joseph Connell, John Marchant, Javier Calleja, Michael Mulqueen, Jordi Muñoz, Arnoud Klaren, The SmartSpectra Team
Abstract: SmartSpectra is a smart multispectral system for industrial, environmental, and commercial applications where the use of spectral information beyond the visible range is needed. The SmartSpectra system provides six spectral bands in the range 400–1000 nm. The bands are configurable in terms of central wavelength and bandwidth by using electronically tunable filters. SmartSpectra consists of a multispectral sensor and the software that controls the system and simplifies the acquisition process. A first prototype, called the Autonomous Tunable Filter System, is already available. This paper describes the SmartSpectra system, demonstrates its performance in the estimation of chlorophyll in plant leaves, and discusses its implications for real-time applications.
Volume 11, Issue 2, Pages 85-98
Citations: 29
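The abstract mentions chlorophyll estimation but not the estimator itself, so the sketch below only computes a generic normalized band-ratio index of the kind commonly used as a chlorophyll proxy from two of the configurable bands; the band choice and the index are assumptions for illustration only.

```python
import numpy as np

def band_ratio_index(nir_band, red_band):
    """NDVI-style normalised ratio of two co-registered band images."""
    nir = nir_band.astype(np.float32)
    red = red_band.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)

# With a six-band cube cube[band, y, x] covering 400-1000 nm, one would pick e.g. a
# near-infrared band and a red band and map the index to chlorophyll via calibration.
```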
Digital zooming for color filter array-based image sensors
Real-Time Imaging Pub Date: 2005-04-01 DOI: 10.1016/j.rti.2005.01.002
Rastislav Lukac, Konstantinos N. Plataniotis
Abstract: In this paper, zooming methods which operate directly on color filter array (CFA) data are proposed, analyzed, and evaluated. Under the proposed framework, enlarged-spatial-resolution images are generated directly from CFA-based image sensors. The reduced computational complexity of the proposed schemes makes them ideal for real-time surveillance systems, industrial-strength computer vision solutions, and mobile sensor-based visual systems. Simulation studies reported here indicate that the new methods (i) produce excellent results in terms of both objective and subjective evaluation metrics, and (ii) outperform conventional zooming schemes operating in the RGB domain.
Volume 11, Issue 2, Pages 129-138
Citations: 20
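To illustrate what operating directly on CFA data can mean, the sketch below enlarges a Bayer mosaic by splitting it into its four colour sub-planes, upsampling each, and re-interleaving them into a larger mosaic that can then be demosaicked as usual; the RGGB layout and nearest-neighbour enlargement are simplifying assumptions, and the paper's CFA interpolation is more elaborate.

```python
import numpy as np

def zoom_cfa_rggb(cfa, factor=2):
    """cfa: (H, W) raw mosaic with RGGB layout, H and W even.
    Returns a (factor*H, factor*W) mosaic with the same RGGB structure."""
    H, W = cfa.shape
    out = np.zeros((factor * H, factor * W), dtype=cfa.dtype)
    for dy in (0, 1):
        for dx in (0, 1):
            plane = cfa[dy::2, dx::2]                       # one colour sub-plane
            big = np.kron(plane, np.ones((factor, factor), dtype=cfa.dtype))
            out[dy::2, dx::2] = big                         # re-interleave into mosaic
    return out
```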
Plant disease detection based on data fusion of hyper-spectral and multi-spectral fluorescence imaging using Kohonen maps
Real-Time Imaging Pub Date: 2005-04-01 DOI: 10.1016/j.rti.2005.03.003
D. Moshou, C. Bravo, R. Oberti, J. West, L. Bodria, A. McCartney, H. Ramon
Abstract: The objective of this research was to develop a ground-based real-time remote sensing system for detecting diseases in arable crops under field conditions and at an early stage of disease development, before the disease can be detected visibly. This was achieved through sensor fusion of hyper-spectral reflection information between 450 and 900 nm and fluorescence imaging. The work reported here used yellow rust (Puccinia striiformis) disease of winter wheat as a model system for testing the featured technologies. Hyper-spectral reflection images of healthy and infected plants were taken with an imaging spectrograph under field conditions and ambient lighting. Multi-spectral fluorescence images were taken simultaneously on the same plants using UV-blue excitation. Through comparison of the 550 and 690 nm fluorescence images, it was possible to detect disease presence. The fraction of pixels in one image recognized as diseased was taken as the final fluorescence disease variable, called the lesion index (LI). A spectral reflection method based on only three wavebands was developed that could discriminate diseased from healthy plants with an overall error of about 11.3%. The method based on fluorescence was less accurate, with an overall discrimination error of about 16.5%. However, fusing the measurements from the two approaches allowed an overall diseased-versus-healthy discrimination accuracy of 94.5% using QDA. Data fusion was also performed using a Self-Organizing Map (SOM) neural network, which decreased the overall classification error to 1%. The possible implementation of the SOM-based disease classifier for rapid retraining in the field is discussed, as are the real-time aspects of the acquisition and processing of spectral and fluorescence images. With the proposed adaptations, the multi-sensor fusion disease detection system can be applied to the real-time detection of plant disease in the field.
Volume 11, Issue 2, Pages 75-83
Citations: 183
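A minimal sketch of the lesion-index idea described above: compare the co-registered 550 nm and 690 nm fluorescence images pixel by pixel, flag pixels whose ratio departs from that of healthy tissue, and report the diseased fraction as LI. The per-pixel decision rule used here (a fixed ratio threshold and its direction) is an assumption; the paper's actual rule and the SOM-based fusion are not reproduced.

```python
import numpy as np

def lesion_index(f550, f690, ratio_thresh=1.2):
    """f550, f690: co-registered fluorescence images; returns LI in [0, 1]."""
    ratio = f550.astype(np.float32) / (f690.astype(np.float32) + 1e-6)
    diseased = ratio > ratio_thresh      # assumed direction of the ratio shift for diseased tissue
    return float(diseased.mean())        # fraction of pixels flagged as diseased
```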