Scene and content analysis from multiple video streams

S. Guler
{"title":"来自多个视频流的场景和内容分析","authors":"S. Guler","doi":"10.1109/AIPR.2001.991213","DOIUrl":null,"url":null,"abstract":"In this paper, we describe a framework for video analysis and a method to detect and understand the class of events we refer to as \"split and merge events\" from single or multiple video streams. We start with automatic detection of scene changes, including camera operations such as zoom, pan, tilts and scene cuts. For each new scene, camera calibration is performed, the scene geometry is estimated, to determine the absolute positions for each detected object. Objects in the video scenes are detected using an adaptive background subtraction method and tracked over consecutive frames. Objects are detected and tracked in a way to identify the key split and merge behaviors where one object splits into two or more objects and two or more objects merge into one object. We have identified split and merge behaviors as the key behavior components for several higher level activities such package drop-off, exchange between people, people getting out of cars or forming crowds etc. We embed the data about scenes, camera parameters, object features, positions into the video stream as metadata to correlate, compare and associate the results for several related scenes and achieve better video event understanding. This location for the detailed syntactic information allows it to be physically associated with the video itself and guarantees that analysis results will be preserved while in archival storage or when sub-clips are created for distribution to other users. We present some preliminary results over single and multiple video streams.","PeriodicalId":277181,"journal":{"name":"Proceedings 30th Applied Imagery Pattern Recognition Workshop (AIPR 2001). Analysis and Understanding of Time Varying Imagery","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Scene and content analysis from multiple video streams\",\"authors\":\"S. Guler\",\"doi\":\"10.1109/AIPR.2001.991213\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we describe a framework for video analysis and a method to detect and understand the class of events we refer to as \\\"split and merge events\\\" from single or multiple video streams. We start with automatic detection of scene changes, including camera operations such as zoom, pan, tilts and scene cuts. For each new scene, camera calibration is performed, the scene geometry is estimated, to determine the absolute positions for each detected object. Objects in the video scenes are detected using an adaptive background subtraction method and tracked over consecutive frames. Objects are detected and tracked in a way to identify the key split and merge behaviors where one object splits into two or more objects and two or more objects merge into one object. We have identified split and merge behaviors as the key behavior components for several higher level activities such package drop-off, exchange between people, people getting out of cars or forming crowds etc. We embed the data about scenes, camera parameters, object features, positions into the video stream as metadata to correlate, compare and associate the results for several related scenes and achieve better video event understanding. 
This location for the detailed syntactic information allows it to be physically associated with the video itself and guarantees that analysis results will be preserved while in archival storage or when sub-clips are created for distribution to other users. We present some preliminary results over single and multiple video streams.\",\"PeriodicalId\":277181,\"journal\":{\"name\":\"Proceedings 30th Applied Imagery Pattern Recognition Workshop (AIPR 2001). Analysis and Understanding of Time Varying Imagery\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2001-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings 30th Applied Imagery Pattern Recognition Workshop (AIPR 2001). Analysis and Understanding of Time Varying Imagery\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIPR.2001.991213\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 30th Applied Imagery Pattern Recognition Workshop (AIPR 2001). Analysis and Understanding of Time Varying Imagery","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2001.991213","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8

Abstract

In this paper, we describe a framework for video analysis and a method to detect and understand the class of events we refer to as "split and merge events" in single or multiple video streams. We start with automatic detection of scene changes, including camera operations such as zoom, pan, tilt, and scene cuts. For each new scene, camera calibration is performed and the scene geometry is estimated to determine the absolute position of each detected object. Objects in the video scenes are detected using an adaptive background subtraction method and tracked over consecutive frames. Detection and tracking are designed to identify the key split and merge behaviors, in which one object splits into two or more objects and two or more objects merge into one. We have identified split and merge behaviors as the key behavioral components of several higher-level activities such as package drop-off, exchanges between people, people getting out of cars, or the forming of crowds. We embed data about scenes, camera parameters, object features, and positions into the video stream as metadata to correlate, compare, and associate the results for several related scenes and achieve better video event understanding. Storing this detailed syntactic information in the stream keeps it physically associated with the video itself and guarantees that analysis results are preserved in archival storage or when sub-clips are created for distribution to other users. We present some preliminary results over single and multiple video streams.
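
The abstract summarizes several standard building blocks: scene-change detection, per-scene camera calibration, adaptive background subtraction, blob tracking, and split/merge reasoning. As a rough illustration of the last two steps only, the following Python/OpenCV sketch maintains a running-average background model, extracts foreground blobs, and flags candidate split and merge events by testing bounding-box overlap between consecutive frames. This is not the authors' implementation; the learning rate `ALPHA`, the thresholds, the overlap heuristic, and the input file name `input.avi` are assumptions chosen only to make the idea concrete.

```python
# Minimal sketch of adaptive background subtraction (running-average model)
# plus an overlap-based split/merge check between consecutive frames.
# Illustrative only; not the method described in the paper.
import cv2
import numpy as np

ALPHA = 0.02        # background learning rate (assumed)
FG_THRESHOLD = 30   # intensity difference counted as foreground (assumed)
MIN_AREA = 200      # ignore tiny blobs (assumed)

def detect_blobs(gray, background):
    """Return foreground bounding boxes, then adapt the background model."""
    diff = cv2.absdiff(gray, background.astype(np.uint8))
    fg_mask = (diff > FG_THRESHOLD).astype(np.uint8) * 255
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= MIN_AREA]
    # Slowly blend the current frame into the background estimate.
    cv2.accumulateWeighted(gray, background, ALPHA)
    return boxes

def overlaps(a, b):
    """True if two (x, y, w, h) boxes intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def split_merge_events(prev_boxes, curr_boxes):
    """Flag splits (one old blob -> several new) and merges (several old -> one new)."""
    events = []
    for p in prev_boxes:
        hits = [c for c in curr_boxes if overlaps(p, c)]
        if len(hits) >= 2:
            events.append(("split", p, hits))
    for c in curr_boxes:
        hits = [p for p in prev_boxes if overlaps(p, c)]
        if len(hits) >= 2:
            events.append(("merge", hits, c))
    return events

if __name__ == "__main__":
    cap = cv2.VideoCapture("input.avi")   # hypothetical input clip
    ok, frame = cap.read()
    if not ok:
        raise SystemExit("could not read video")
    background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    prev_boxes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = detect_blobs(gray, background)
        for event in split_merge_events(prev_boxes, boxes):
            print(event)
        prev_boxes = boxes
    cap.release()
```

Camera calibration, the mapping of blobs to absolute positions, and the embedding of scene and object metadata into the video stream described in the abstract are omitted from this sketch.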