{"title":"Automatic Representation and Segmentation of Video Sequences via a Novel Framework Based on the nD-EVM and Kohonen Networks","authors":"José-Yovany Luis-García, R. Pérez-Aguila","doi":"10.1155/2016/6361237","DOIUrl":null,"url":null,"abstract":"Recently in the Computer Vision field, a subject of interest, at least in almost every video application based on scene content, is video segmentation. Some of these applications are indexing, surveillance, medical imaging, event analysis, and computer-guided surgery, for naming some of them. To achieve their goals, these applications need meaningful information about a video sequence, in order to understand the events in its corresponding scene. Therefore, we need semantic information which can be obtained from objects of interest that are present in the scene. In order to recognize objects we need to compute features which aid the finding of similarities and dissimilarities, among other characteristics. For this reason, one of the most important tasks for video and image processing is segmentation. The segmentation process consists in separating data into groups that share similar features. Based on this, in this work we propose a novel framework for video representation and segmentation. The main workflow of this framework is given by the processing of an input frame sequence in order to obtain, as output, a segmented version. For video representation we use the Extreme Vertices Model in the -Dimensional Space while we use the Discrete Compactness descriptor as feature and Kohonen Self-Organizing Maps for segmentation purposes.","PeriodicalId":7253,"journal":{"name":"Adv. Artif. Intell.","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2016-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adv. Artif. Intell.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1155/2016/6361237","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Recently, in the Computer Vision field, video segmentation has become a subject of interest for almost every video application based on scene content. Some of these applications are indexing, surveillance, medical imaging, event analysis, and computer-guided surgery, to name a few. To achieve their goals, these applications need meaningful information about a video sequence in order to understand the events in its corresponding scene. Therefore, we need semantic information, which can be obtained from the objects of interest present in the scene. In order to recognize objects, we need to compute features that aid in finding similarities and dissimilarities, among other characteristics. For this reason, one of the most important tasks in video and image processing is segmentation. The segmentation process consists of separating data into groups that share similar features. Based on this, in this work we propose a novel framework for video representation and segmentation. The main workflow of this framework processes an input frame sequence in order to obtain, as output, a segmented version of it. For video representation we use the Extreme Vertices Model in the n-Dimensional Space (nD-EVM), while we use the Discrete Compactness descriptor as the feature and Kohonen Self-Organizing Maps for segmentation purposes.
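The abstract describes a pipeline in which per-region descriptors are extracted from the nD-EVM representation and then clustered with a Kohonen Self-Organizing Map. The snippet below is only a minimal sketch of that final clustering stage, assuming the feature vectors (e.g., Discrete Compactness values per candidate region) have already been computed; the tiny NumPy SOM, the function names, and the toy data are illustrative and are not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code) of SOM-based segmentation:
# cluster precomputed per-region feature vectors with a small Kohonen map.
import numpy as np

def train_som(features, grid_shape=(4, 4), epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 2D SOM on row-wise feature vectors; returns the weight grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    dim = features.shape[1]
    weights = rng.random((rows, cols, dim))
    # Grid coordinates of every node, used by the neighborhood function.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)  # shrinking neighborhood radius
        for x in features[rng.permutation(len(features))]:
            # Best Matching Unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))
            # Gaussian neighborhood around the BMU pulls nearby nodes toward x.
            d2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

def segment(features, weights):
    """Label each feature vector with the index of its Best Matching Unit."""
    rows, cols, _ = weights.shape
    flat = weights.reshape(rows * cols, -1)
    return np.argmin(np.linalg.norm(flat[None, :, :] - features[:, None, :], axis=-1), axis=1)

if __name__ == "__main__":
    # Hypothetical descriptors: two groups of regions with distinct compactness values.
    rng = np.random.default_rng(1)
    descriptors = np.vstack([rng.normal(0.2, 0.05, (50, 1)), rng.normal(0.8, 0.05, (50, 1))])
    som = train_som(descriptors, grid_shape=(2, 2), epochs=50)
    labels = segment(descriptors, som)
    print(labels)  # regions mapped to the same SOM node share a segment label
```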