A new fusion framework for motion segmentation in dynamic scenes

IF 1.8 Q3 REMOTE SENSING
Lazhar Khelifi, M. Mignotte
{"title":"一种新的动态场景运动分割融合框架","authors":"Lazhar Khelifi, M. Mignotte","doi":"10.1080/19479832.2021.1900408","DOIUrl":null,"url":null,"abstract":"ABSTRACT Motion segmentation in dynamic scenes is currently widely dominated by parametric methods based on deep neural networks. The present study explores the unsupervised segmentation approach that can be used in the absence of training data to segment new videos. In particular, it tackles the task of dynamic texture segmentation. By automatically assigning a single class label to each region or group, this task consists of clustering into groups complex phenomena and characteristics which are both spatially and temporally repetitive. We present an effective fusion framework for motion segmentation in dynamic scenes (FFMS). This model is designed to merge different segmentation maps that contain multiple and weak quality regions in order to achieve a more accurate final result of segmentation. The diverse labelling fields required for the combination process are obtained by a simplified grouping scheme applied to an input video (on the basis of a three orthogonal planes: , and ). Experiments conducted on two challenging datasets (SynthDB and YUP++) show that, contrary to current motion segmentation approaches that either require parameter estimation or a training step, FFMS is significantly faster, easier to code, simple and has limited parameters.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"12 1","pages":"99 - 121"},"PeriodicalIF":1.8000,"publicationDate":"2021-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2021.1900408","citationCount":"2","resultStr":"{\"title\":\"A new fusion framework for motion segmentation in dynamic scenes\",\"authors\":\"Lazhar Khelifi, M. Mignotte\",\"doi\":\"10.1080/19479832.2021.1900408\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Motion segmentation in dynamic scenes is currently widely dominated by parametric methods based on deep neural networks. The present study explores the unsupervised segmentation approach that can be used in the absence of training data to segment new videos. In particular, it tackles the task of dynamic texture segmentation. By automatically assigning a single class label to each region or group, this task consists of clustering into groups complex phenomena and characteristics which are both spatially and temporally repetitive. We present an effective fusion framework for motion segmentation in dynamic scenes (FFMS). This model is designed to merge different segmentation maps that contain multiple and weak quality regions in order to achieve a more accurate final result of segmentation. The diverse labelling fields required for the combination process are obtained by a simplified grouping scheme applied to an input video (on the basis of a three orthogonal planes: , and ). 
Experiments conducted on two challenging datasets (SynthDB and YUP++) show that, contrary to current motion segmentation approaches that either require parameter estimation or a training step, FFMS is significantly faster, easier to code, simple and has limited parameters.\",\"PeriodicalId\":46012,\"journal\":{\"name\":\"International Journal of Image and Data Fusion\",\"volume\":\"12 1\",\"pages\":\"99 - 121\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2021-04-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1080/19479832.2021.1900408\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Image and Data Fusion\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/19479832.2021.1900408\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"REMOTE SENSING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Image and Data Fusion","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/19479832.2021.1900408","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"REMOTE SENSING","Score":null,"Total":0}
Citations: 2

Abstract

Motion segmentation in dynamic scenes is currently dominated by parametric methods based on deep neural networks. The present study explores an unsupervised segmentation approach that can be used, in the absence of training data, to segment new videos. In particular, it tackles the task of dynamic texture segmentation. By automatically assigning a single class label to each region or group, this task amounts to clustering complex phenomena and characteristics that are repetitive in both space and time. We present an effective fusion framework for motion segmentation in dynamic scenes (FFMS). This model is designed to merge different segmentation maps, each of which may contain multiple regions of weak quality, in order to achieve a more accurate final segmentation. The diverse labelling fields required for the combination process are obtained by a simplified grouping scheme applied to an input video (on the basis of three orthogonal planes). Experiments conducted on two challenging datasets (SynthDB and YUP++) show that, in contrast to current motion segmentation approaches that require either parameter estimation or a training step, FFMS is significantly faster, easier to code, simpler, and has few parameters.
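
To make the fusion idea concrete, here is a minimal, hypothetical Python sketch (NumPy, SciPy, scikit-learn). It is not the authors' FFMS algorithm: it assumes the three orthogonal views of a (T, H, W) video cube are the XY, XT and YT planes (the usual convention in dynamic texture analysis, not stated in this abstract), and the per-pixel features, the re-seeded k-means used to produce several weak label fields, and the align-then-majority-vote consensus are all illustrative stand-ins for the paper's actual grouping scheme and fusion criterion.

```python
# Hypothetical sketch only; not the paper's FFMS algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def plane_features(video):
    """Per-pixel cues from the XY, XT and YT views of a (T, H, W) cube."""
    mean_xy = video.mean(axis=0)                   # appearance (XY view)
    std_t = video.std(axis=0)                      # temporal activity
    grad_x = np.abs(np.gradient(video, axis=2)).mean(axis=0)  # XT-view structure
    grad_y = np.abs(np.gradient(video, axis=1)).mean(axis=0)  # YT-view structure
    return np.stack([mean_xy, std_t, grad_x, grad_y], axis=-1).reshape(-1, 4)

def weak_segmentations(video, k=2, n_maps=5):
    """Several weak label fields: k-means restarted with different seeds."""
    feats = plane_features(video)
    return [KMeans(n_clusters=k, n_init=1, random_state=s).fit_predict(feats)
            for s in range(n_maps)]

def align(ref, seg, k):
    """Permute seg's labels to maximise agreement with ref (Hungarian matching)."""
    overlap = np.array([[np.sum((seg == a) & (ref == b)) for b in range(k)]
                        for a in range(k)])
    _, cols = linear_sum_assignment(-overlap)      # best seg-label -> ref-label map
    return cols[seg]                               # relabel seg in ref's label space

def fuse(segs, k):
    """Per-pixel majority vote over the aligned label fields."""
    aligned = np.stack([segs[0]] + [align(segs[0], s, k) for s in segs[1:]])
    return np.apply_along_axis(lambda v: np.bincount(v, minlength=k).argmax(),
                               axis=0, arr=aligned)

# Toy dynamic scene: quiet background plus a strongly flickering patch.
rng = np.random.default_rng(0)
video = rng.normal(0.0, 0.05, size=(20, 32, 32))
video[:, 8:24, 8:24] += rng.normal(0.0, 0.5, size=(20, 16, 16))
labels = fuse(weak_segmentations(video, k=2), k=2).reshape(32, 32)
```

On this toy input the vote tends to recover the flickering patch as one segment. A real consensus method would also have to handle label fields with different numbers of regions; this sketch sidesteps that by fixing k across all maps.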
Source journal: International Journal of Image and Data Fusion
CiteScore: 5.00
Self-citation rate: 0.00%
Articles published: 10
Journal description: International Journal of Image and Data Fusion provides a single source of information for all aspects of image and data fusion methodologies, developments, techniques and applications. Image and data fusion techniques are important for combining the many sources of satellite, airborne and ground-based imaging systems, and for integrating these with other related data sets for enhanced information extraction and decision making. Image and data fusion aims at the integration of multi-sensor, multi-temporal, multi-resolution and multi-platform image data, together with geospatial data, GIS, in-situ and other statistical data sets, for improved information extraction as well as increased reliability of the information. This leads to more accurate information that provides for robust operational performance, i.e. increased confidence, reduced ambiguity and improved classification enabling evidence-based management.

The journal welcomes original research papers, review papers, shorter letters, technical articles, book reviews and conference reports in all areas of image and data fusion including, but not limited to, the following aspects and topics:

• Automatic registration/geometric aspects of fusing images with different spatial, spectral or temporal resolutions, phase information, or images acquired in different modes
• Pixel-, feature- and decision-level fusion algorithms and methodologies
• Data assimilation: fusing data with models
• Multi-source classification and information extraction
• Integration of satellite, airborne and terrestrial sensor systems
• Fusing temporal data sets for change detection studies (e.g. for land cover/land use change studies)
• Image and data mining from multi-platform, multi-source, multi-scale, multi-temporal data sets (e.g. geometric information, topological information, statistical information, etc.)