Latest publications — 2009 10th Workshop on Image Analysis for Multimedia Interactive Services

Neighborhood-based feature weighting for relevance feedback in content-based retrieval
2009 10th Workshop on Image Analysis for Multimedia Interactive Services Pub Date : 2009-05-06 DOI: 10.1109/WIAMIS.2009.5031477
Luca Piras, G. Giacinto
Abstract: High retrieval precision in content-based image retrieval can be attained by adopting relevance feedback mechanisms. In this paper we propose a weighted similarity measure based on the nearest-neighbor relevance feedback technique previously proposed by the authors. Each image is ranked according to a relevance score that depends on its nearest-neighbor distances from relevant and non-relevant images. Distances are computed by a weighted measure, the weights reflecting how well each feature space represents relevant images as nearest neighbors. This approach is used to weight individual features and feature subsets, and also to weight relevance scores computed from different feature spaces. Reported results show that the proposed weighting scheme improves performance with respect to unweighted distances and to other weighting schemes.
Citations: 15
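The ranking rule described in the abstract — score each image by its nearest-neighbor distances to the relevant and non-relevant sets under a weighted metric — can be illustrated roughly as follows. This is a minimal sketch with a hypothetical score formula (distance to the nearest non-relevant image, normalized by the sum of both nearest-neighbor distances), not the authors' exact formulation or weights.

```python
import numpy as np

def relevance_scores(candidates, relevant, non_relevant, weights=None):
    """Score candidates by nearest-neighbor distances to the relevant
    and non-relevant sets (higher = more relevant). `weights` is a
    per-feature weight vector for the distance metric."""
    w = np.ones(candidates.shape[1]) if weights is None else weights

    def nn_dist(x, pool):
        # weighted Euclidean distance to the nearest point in `pool`
        d = np.sqrt((((pool - x) ** 2) * w).sum(axis=1))
        return d.min()

    scores = []
    for x in candidates:
        d_rel = nn_dist(x, relevant)
        d_non = nn_dist(x, non_relevant)
        # hypothetical score in [0, 1]: large when the nearest relevant
        # image is much closer than the nearest non-relevant one
        scores.append(d_non / (d_rel + d_non + 1e-12))
    return np.array(scores)
```

A candidate close to the relevant set thus scores near 1, and one close to the non-relevant set scores near 0; per-feature weights let feature spaces that cluster relevant images tightly dominate the metric.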
Comparative evaluation of spatial context techniques for semantic image analysis
2009 10th Workshop on Image Analysis for Multimedia Interactive Services Pub Date : 2009-05-06 DOI: 10.1109/WIAMIS.2009.5031457
G. Papadopoulos, C. Saathoff, M. Grzegorzek, V. Mezaris, Y. Kompatsiaris, Steffen Staab, M. Strintzis
Abstract: In this paper, two approaches to utilizing contextual information in semantic image analysis are presented and comparatively evaluated. Both make use of spatial context in the form of fuzzy directional relations. The first is based on a Genetic Algorithm (GA), employed to decide upon the optimal semantic image interpretation by treating semantic image analysis as a global optimization problem. The second follows a Binary Integer Programming (BIP) technique for estimating the optimal solution. Both spatial context techniques are evaluated with several combinations of classifiers and low-level features, in order to demonstrate the improvements attained using spatial context in a number of different image analysis schemes.
Citations: 13
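The global-optimization view shared by both methods — choose the label assignment that best trades off per-region classifier confidence against pairwise spatial-relation compatibility — can be shown on a toy two-region example. All labels, confidences, and compatibility scores below are hypothetical, and brute-force enumeration stands in for the GA and BIP solvers of the paper.

```python
import itertools

labels = ["sky", "sea", "sand"]
# hypothetical per-region classifier confidences
conf = {0: {"sky": 0.9, "sea": 0.3, "sand": 0.1},
        1: {"sky": 0.2, "sea": 0.7, "sand": 0.5}}
# compat[(a, b)] = degree to which label a plausibly appears ABOVE label b
compat = {("sky", "sea"): 1.0, ("sea", "sand"): 0.8, ("sky", "sand"): 0.9}
compat = {**compat, **{(b, a): 0.1 for a, b in list(compat)}}
above = [(0, 1)]  # region 0 lies above region 1 in the image

def score(assign):
    # classifier confidence plus spatial-context compatibility
    s = sum(conf[r][l] for r, l in enumerate(assign))
    s += sum(compat.get((assign[a], assign[b]), 0.0) for a, b in above)
    return s

# exhaustive search over assignments (GA/BIP would scale this up)
best = max(itertools.product(labels, repeat=2), key=score)
```

With these numbers the context term overrides the second region's weak classifier and selects the spatially consistent ("sky", "sea") interpretation.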
Probabilistic graphical models for human motion tracking
2009 10th Workshop on Image Analysis for Multimedia Interactive Services Pub Date : 2009-05-06 DOI: 10.1109/WIAMIS.2009.5031453
José I. Gómez, M. Marín-Jiménez, N. P. D. L. Blanca
Abstract: Graphical models have proved to be very efficient for labeling image data. In particular, they have been used to label data samples from human body images. In this paper, a DTG-based graphical model is studied for human-body landmark localization and tracking along an image sequence. Experimental results on human motion databases are shown.
Citations: 0
Video analysis for browsing and printing
2009 10th Workshop on Image Analysis for Multimedia Interactive Services Pub Date : 2009-05-06 DOI: 10.1109/WIAMIS.2009.5031469
Q. Lin, Tong Zhang, Mei Chen, Yining Deng, C. B. Atkins
Abstract: More and more home videos are being generated with the ever-growing popularity of digital cameras and camcorders. In many cases, a photo, whether capturing a moment or a scene within the video, provides a complementary representation to the video. In this paper, a complete video-to-photo solution is presented. The intent of the user is first derived by analyzing video motion. Photos are then produced accordingly from the video: keyframes at video highlights, panoramas of the scene, or high-resolution frames. Methods and results for camera motion mining, intelligent keyframe extraction, video frame stitching, and super-resolution enhancement are described.
Citations: 4
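As an illustration of the keyframe-extraction step only, here is a toy picker that scores each frame by its mean absolute difference from the previous frame and keeps the highest-scoring ones. This is a crude stand-in for the paper's camera-motion mining; the function name and scoring rule are assumptions for the sketch.

```python
import numpy as np

def keyframe_indices(frames, top_k=3):
    """Pick `top_k` keyframes from a list of grayscale frames
    (2-D arrays) by inter-frame change, returned in temporal order."""
    # score each frame by mean absolute difference from its predecessor
    diffs = [0.0] + [np.abs(b - a).mean() for a, b in zip(frames, frames[1:])]
    # keep the top_k highest-change frames, sorted back into time order
    return sorted(np.argsort(diffs)[::-1][:top_k])
```

A real system would replace the frame-difference score with estimated camera/object motion and suppress near-duplicate peaks.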
Repetition density-based approach for TV program extraction
2009 10th Workshop on Image Analysis for Multimedia Interactive Services Pub Date : 2009-05-06 DOI: 10.1109/WIAMIS.2009.5031463
Gaël Manson, Sid-Ahmed Berrani
Abstract: This paper addresses the problem of automatically extracting broadcast TV programs. The task consists first of precisely determining the start and end of each broadcast program, and then of assigning each program a name. The extracted programs can be used to build novel services such as TV-on-Demand. The proposed solution is based on studying the density of repeated audiovisual sequences, which makes it possible to separate most inter-programs from the other repeated sequences. The effectiveness of the solution is demonstrated on two distinct real TV streams, each lasting 5 days. A comparative evaluation with traditional approaches (metadata-based, and based on silences and monochrome frames) has also been performed.
Citations: 2
Coarse-to-fine moving region segmentation in compressed video
2009 10th Workshop on Image Analysis for Multimedia Interactive Services Pub Date : 2009-05-06 DOI: 10.1109/WIAMIS.2009.5031428
Yue Chen, I. Bajić, Parvaneh Saeedi
Abstract: In this paper, we propose a coarse-to-fine segmentation method for extracting moving regions from compressed video. First, motion vectors are clustered to provide a coarse segmentation of moving regions at the block level. Second, boundaries between moving regions are identified. Finally, a fine segmentation is performed within boundary regions using edge and color information. Experimental results show that the proposed method can segment moving regions fairly accurately, with sensitivity of 85% or higher and specificity of over 95%.
Citations: 15
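The reported sensitivity and specificity figures are standard pixel-level measures and can be computed from a predicted moving-region mask and a ground-truth mask as follows (a straightforward sketch, not the authors' evaluation code):

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """pred, truth: boolean masks where True marks 'moving' pixels.
    Returns (sensitivity, specificity)."""
    tp = np.logical_and(pred, truth).sum()      # moving, detected
    fn = np.logical_and(~pred, truth).sum()     # moving, missed
    tn = np.logical_and(~pred, ~truth).sum()    # static, correctly ignored
    fp = np.logical_and(pred, ~truth).sum()     # static, falsely detected
    return tp / (tp + fn), tn / (tn + fp)
```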
3D shape from multi-camera views by error projection minimization
2009 10th Workshop on Image Analysis for Multimedia Interactive Services Pub Date : 2009-05-06 DOI: 10.1109/WIAMIS.2009.5031480
G. Haro, M. Pardàs
Abstract: Traditional shape-from-silhouette methods compute the 3D shape as the intersection of the back-projected silhouettes in 3D space, the so-called visual hull. However, silhouettes obtained with background subtraction techniques often contain misdetection errors (produced by false negatives or occlusions), which yield incomplete 3D shapes. Our approach deals with misdetections and noise in the silhouettes. We recover the voxel occupancy describing the 3D shape by minimizing an energy based on an approximation of the error between the shape's 2D projections and the silhouettes. The energy also includes regularization and takes into account the visibility of the voxels in each view in order to handle self-occlusions.
Citations: 1
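For contrast with the paper's energy-minimization approach, the traditional hard silhouette intersection it improves on can be sketched as below. Cameras are modeled as plain projection functions for illustration; a single False silhouette pixel (a misdetection) is enough to carve away a true voxel, which is exactly the failure mode the paper's soft formulation addresses.

```python
import numpy as np

def visual_hull(voxels, cameras, silhouettes):
    """Classic visual hull: keep a voxel only if EVERY camera projects
    it inside that camera's silhouette. `cameras` are functions mapping
    a 3-D voxel to integer 2-D silhouette coordinates."""
    keep = np.ones(len(voxels), dtype=bool)
    for project, sil in zip(cameras, silhouettes):
        for i, v in enumerate(voxels):
            u, w = project(v)
            inside = 0 <= u < sil.shape[0] and 0 <= w < sil.shape[1]
            # hard intersection: one miss in any view removes the voxel
            keep[i] &= inside and bool(sil[u, w])
    return keep
```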
Fully automatic inpainting method for complex image content
2009 10th Workshop on Image Analysis for Multimedia Interactive Services Pub Date : 2009-05-06 DOI: 10.1109/WIAMIS.2009.5031465
Martin Köppel, D. Doshkov, P. Ndjiki-Nya
Abstract: A novel, fully automatic framework for the restoration of unknown or damaged picture areas is presented. Diverse causes such as accidental damage, manual removal, or transmission loss may have led to the missing visual information. The challenge then consists in repairing the occluded or missing image regions in an undetectable way. Here, the assumption is made that dominant structures are of salient relevance to human perception. Hence, they are accounted for in the filling process by using tensor voting, a structure-inference approach based on the Gestalt laws of proximity and good continuation. Based on a new segmentation-based inference mechanism presented in this paper, missing textures crossing dominant structures are robustly recovered. An efficient post-processing step based on cloning via covariant derivatives improves the visual quality of the inpainted textures. The proposed method yields significantly better results than previous approaches.
Citations: 8
Contextual information in virtual collaboration systems beyond current standards
2009 10th Workshop on Image Analysis for Multimedia Interactive Services Pub Date : 2009-05-06 DOI: 10.1109/WIAMIS.2009.5031470
A. Carreras, M. T. Andrade, T. Masterton, H. K. Arachchi, V. Barbosa, S. Dogan, J. Delgado, A. Kondoz
Abstract: Context-aware applications are fast becoming popular as a means of enriching users' experiences in various multimedia content access and delivery scenarios. Nevertheless, the definition, identification, and representation of contextual information are still open issues that need to be addressed. In this paper, we briefly present our work, developed within the VISNET II Network of Excellence (NoE) project, on context-based content adaptation in Virtual Collaboration Systems (VCSs). Based on the conducted research, we conclude that MPEG-21 Digital Item Adaptation (DIA) is the most complete standardization initiative for representing context for content adaptation. However, the tools defined in the MPEG-21 DIA Usage Environment Descriptors (UEDs) are not adequate for Virtual Collaboration application scenarios, and thus we propose potential extensions to the available UEDs.
Citations: 4
Why the alternative PCA provides better performance for face recognition
2009 10th Workshop on Image Analysis for Multimedia Interactive Services Pub Date : 2009-05-06 DOI: 10.1109/WIAMIS.2009.5031454
I. Wijaya, K. Uchimura, Zhencheng Hu
Abstract: This paper presents an alternative to the PCA technique, called APCA, which uses the within-class scatter matrix rather than the global covariance matrix. The APCA technique produces better feature clusters than common PCA (CPCA) because it keeps the null spaces, which contain good discriminant information. The proposed technique achieves better recognition rate and accuracy than CPCA when tested on several databases (ITS-LAB., INDIA, ORL, and FERET).
Citations: 7
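The core change — replacing the global covariance with the within-class scatter when selecting projection axes — can be sketched as follows. This is a minimal illustration under assumed data shapes (rows = samples, columns = features), not the authors' implementation.

```python
import numpy as np

def top_eigvecs(S, k):
    # eigenvectors of a symmetric matrix, largest eigenvalues first
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, np.argsort(vals)[::-1][:k]]

def cpca(X, k):
    # common PCA: axes of the GLOBAL covariance of all samples
    return top_eigvecs(np.cov(X, rowvar=False), k)

def apca(X, y, k):
    # alternative PCA: axes of the WITHIN-CLASS scatter, here formed as
    # the sum of per-class covariances, so between-class (discriminant)
    # variation is not mixed into the chosen subspace
    Sw = sum(np.cov(X[y == c], rowvar=False) for c in np.unique(y))
    return top_eigvecs(Sw, k)
```

Projecting faces onto `apca` axes then preserves directions in which classes vary little internally, which is where the discriminant information the abstract mentions tends to live.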