2012 IEEE International Symposium on Multimedia: Latest Publications

Using Low Level Gradient Channels for Computationally Efficient Object Detection and Its Application in Logo Detection
2012 IEEE International Symposium on Multimedia Pub Date : 2012-12-10 DOI: 10.1109/ISM.2012.51
Yu Chen, V. Thing
We propose a logo detection approach that uses Haar-like features computed directly from the gradient orientation, gradient magnitude, and gray intensity channels to extract discriminating features effectively and efficiently for a variety of logo images. The contributions of this work are two-fold: 1) we demonstrate that, with an optimized design and implementation, considerable discriminative power can be obtained from simple features such as Haar features extracted directly from the low-level gradient orientation and magnitude channels; 2) we propose an effective and efficient logo detection approach using the Haar features obtained directly from the gradient orientation, magnitude, and gray image channels. Experimental results on collected merchandise images of Louis Vuitton (LV) and Polo Ralph Lauren (PRL) products show the promising applicability of our approach.
Citations: 0
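The low-level channels this paper builds on are standard: per-pixel gradient magnitude and orientation, plus an integral image so any rectangular Haar-like response costs four lookups. Below is a minimal pure-Python sketch of those pieces; it is illustrative only, not the authors' implementation, and the two-rectangle feature shown is just one of the Haar family.

```python
import math

def gradient_channels(img):
    """Central-difference gradient magnitude and orientation per pixel."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    ori = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag[y][x] = math.hypot(gx, gy)
            ori[y][x] = math.atan2(gy, gx)
    return mag, ori

def integral_image(ch):
    """Summed-area table with a zero border row/column."""
    h, w = len(ch), len(ch[0])
    ii = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = ch[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum over a w-by-h rectangle in O(1) via the integral image."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Left-minus-right two-rectangle Haar response on any channel."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

The same `haar_two_rect` call works on the magnitude, orientation, or intensity channel once each has its own integral image, which is what makes the channel-based design cheap.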
3D Scene Generation by Learning from Examples
2012 IEEE International Symposium on Multimedia Pub Date : 2012-12-10 DOI: 10.1109/ISM.2012.19
Mesfin Dema, H. Sari-Sarraf
Due to the widespread use of 3D models in video games and virtual environments, there is growing interest in 3D scene generation, scene understanding, and 3D model retrieval. In this paper, we introduce a data-driven 3D scene generation approach from a Maximum Entropy (MaxEnt) model selection perspective. Using this model selection criterion, new scenes can be sampled by matching a set of contextual constraints extracted from training and synthesized scenes. Starting from a set of randomly synthesized configurations of objects in 3D, the MaxEnt distribution is iteratively sampled (using Metropolis sampling) and updated until the constraints of the training and synthesized scenes match, indicating the generation of plausible synthesized 3D scenes. To illustrate the proposed methodology, we use 3D training desk scenes, all composed of seven predefined objects with different position, scale, and orientation arrangements. After applying the MaxEnt framework, the synthesized scenes show that the proposed strategy can generate scenes reasonably similar to the training examples without any human supervision during sampling. We note, however, that this approach is not limited to desk scene generation and can be extended to any 3D scene generation problem.
Citations: 5
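The sampling loop in this abstract is classic Metropolis: perturb one object, recompute how far the synthesized scene's statistics are from the training statistics, and accept or reject the move. The sketch below uses a single made-up constraint (mean pairwise object distance) and 2D positions in place of the paper's full set of contextual constraints; the `energy` and `temp` choices are assumptions for illustration.

```python
import math
import random

def energy(positions, target_mean_dist):
    """Mismatch between a scene's mean pairwise distance and the
    statistic observed in training scenes (a stand-in constraint)."""
    n = len(positions)
    dists = [math.dist(positions[i], positions[j])
             for i in range(n) for j in range(i + 1, n)]
    mean = sum(dists) / len(dists)
    return (mean - target_mean_dist) ** 2

def metropolis_step(positions, target, step=0.5, temp=0.05, rng=random):
    """Perturb one object; keep the move if energy drops, else keep it
    with probability exp(-dE / temp). Mutates positions in place."""
    i = rng.randrange(len(positions))
    old = positions[i]
    e_old = energy(positions, target)
    positions[i] = (old[0] + rng.uniform(-step, step),
                    old[1] + rng.uniform(-step, step))
    e_new = energy(positions, target)
    if e_new > e_old and rng.random() >= math.exp((e_old - e_new) / temp):
        positions[i] = old  # reject: revert the move
    return energy(positions, target)
```

Iterating `metropolis_step` until the energy plateaus near zero mirrors the paper's stopping condition that training and synthesized constraints match.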
Quantifying the Makeup Effect in Female Faces and Its Applications for Age Estimation
2012 IEEE International Symposium on Multimedia Pub Date : 2012-12-10 DOI: 10.1109/ISM.2012.29
Ranran Feng, B. Prabhakaran
In this paper, we first conduct a comprehensive statistical study of the effect of makeup on facial parts (skin, eyes, and lips). Based on this study, we propose a method to detect whether makeup has been applied to an input facial image, and we further quantify the makeup effect as a Young Index (YI) for female age estimation. An age estimator that takes the makeup effect into account is presented. Experimental results show that, with the makeup effect considered, the proposed method improves accuracy by 0.9-6.7% in CS (Cumulative Score) and by 0.26-9.76 in MAE (Mean Absolute Error between the estimated age and the ground-truth age labeled in or acquired from the data) compared with other age estimation methods.
Citations: 10
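The two evaluation metrics quoted in this abstract, MAE and Cumulative Score, are simple to state precisely: MAE averages the absolute age errors, and CS(tol) is the fraction of test faces whose error falls within a tolerance of tol years. A minimal sketch:

```python
def mean_absolute_error(pred, truth):
    """Mean of |estimated age - ground-truth age| over the test set."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

def cumulative_score(pred, truth, tol):
    """Fraction of estimates whose absolute error is <= tol years."""
    hits = sum(1 for p, t in zip(pred, truth) if abs(p - t) <= tol)
    return hits / len(pred)
```

A reported gain of "0.9-6.7% in CS" therefore means more faces land inside the tolerance band, while a lower MAE means smaller errors on average.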
A Motion-Sketch Based Video Retrieval Using MST-CSS Representation
2012 IEEE International Symposium on Multimedia Pub Date : 2012-12-10 DOI: 10.1109/ISM.2012.76
C. Chattopadhyay, Sukhendu Das
In this work, we propose a framework for a robust Content Based Video Retrieval (CBVR) system with free-hand query sketches, using the Multi-Spectro-Temporal Curvature Scale Space (MST-CSS) representation. Our interface allows sketches to be drawn that depict the shape of the object in motion and its trajectory. We obtain the MST-CSS feature representation from these cues and match it against a set of MST-CSS features generated offline from the video clips in the database (gallery). Results are displayed rank-ordered by similarity. Experiments with benchmark datasets show promising results.
Citations: 4
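The retrieval step described here, matching a query feature against offline gallery features and returning results rank-ordered by similarity, has a very small core. In this sketch the feature vectors and clip names are hypothetical stand-ins for real MST-CSS descriptors, and plain Euclidean distance stands in for whatever matching cost the paper uses.

```python
def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rank_gallery(query_feat, gallery):
    """Sort (clip_id, feature) pairs by ascending distance to the query,
    i.e. most similar clip first."""
    return sorted(gallery, key=lambda item: euclidean(query_feat, item[1]))
```

The gallery features are computed once offline, so only the distance-and-sort step runs per query.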
Mutual Information Based Stereo Correspondence in Extreme Cases
2012 IEEE International Symposium on Multimedia Pub Date : 2012-12-10 DOI: 10.1109/ISM.2012.46
Qing Tian, GuangJun Tian
Stereo correspondence is an ill-posed problem mainly due to matching ambiguity, which is especially serious in extreme cases where the corresponding relationship is unknown and can be very complicated. Mutual information (MI), which assumes no prior relationship on the matching pair, is a good solution to this problem. This paper proposes a context-aware mutual information and Markov Random Field (MRF) based approach with gradient information introduced into both the data term and the smoothness term of the MAP-MRF framework, where advanced techniques such as graph cuts can be used to find an accurate disparity map. The results show that the proposed context-aware method outperforms non-MI and traditional MI-based methods both quantitatively and qualitatively in some extreme cases.
Citations: 1
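The appeal of MI as a matching cost is that it measures statistical dependence between two images' intensities without assuming any functional relationship between them. A minimal sketch of MI from a joint intensity histogram (bin count and range are illustrative choices, not the paper's):

```python
import math

def mutual_information(a, b, bins=8, lo=0, hi=256):
    """MI (in nats) from a joint intensity histogram of two
    equal-size grayscale images given as 2D lists."""
    joint = [[0] * bins for _ in range(bins)]
    n = 0
    scale = bins / (hi - lo)
    for row_a, row_b in zip(a, b):
        for va, vb in zip(row_a, row_b):
            i = min(int((va - lo) * scale), bins - 1)
            j = min(int((vb - lo) * scale), bins - 1)
            joint[i][j] += 1
            n += 1
    pa = [sum(row) / n for row in joint]                     # marginal of a
    pb = [sum(joint[i][j] for i in range(bins)) / n          # marginal of b
          for j in range(bins)]
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            pij = joint[i][j] / n
            if pij > 0:
                mi += pij * math.log(pij / (pa[i] * pb[j]))
    return mi
```

Identical images give the maximum MI (the entropy of the intensity distribution), while an image compared against a constant image gives zero, which is why MI tolerates arbitrary, even non-monotonic, intensity mappings between the two views.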
Tag Cloud++ - Scalable Tag Clouds for Arbitrary Layouts
2012 IEEE International Symposium on Multimedia Pub Date : 2012-12-10 DOI: 10.1109/ISM.2012.66
Minwoo Park, D. Joshi, A. Loui
Tag clouds are becoming extremely popular in the multimedia community as a medium of exploration and expression. In this work, we take tag cloud construction to a new level by allowing a tag cloud to take any arbitrary shape while preserving some order of tags (here, alphabetical). Our method guarantees non-overlap among words and ensures a compact representation within the specified shape. Experiments on a variety of input tag sets and tag cloud shapes show that the proposed method is promising and achieves real-time performance. Finally, we show the applicability of our method in an application wherein tag clouds specific to places, people, and keywords are constructed and used for digital media selection within a social network domain.
Citations: 2
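The two guarantees claimed here, alphabetical order and no overlaps inside a shape, can be illustrated with a toy greedy row-packer. This is a deliberately simplified stand-in: a rectangular box replaces the paper's arbitrary shapes, and every character is assumed one unit wide.

```python
def layout_tags(tags, box_w, char_w=1, line_h=1):
    """Place tags alphabetically, left to right, wrapping to a new row
    when a word would cross the right edge; rows never overlap.
    Returns (word, x, y) triples."""
    placed, x, y = [], 0, 0
    for word in sorted(tags):
        w = len(word) * char_w
        if x + w > box_w:          # word would stick out: wrap
            x, y = 0, y + line_h
        placed.append((word, x, y))
        x += w
    return placed
```

Replacing the `x + w > box_w` test with a point-in-shape query is the conceptual step from this toy to arbitrary layouts.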
Spatio-temporal Gaussian Mixture Model for Background Modeling
2012 IEEE International Symposium on Multimedia Pub Date : 2012-12-10 DOI: 10.1109/ISM.2012.73
Y. Soh, Y. Hae, Intaek Kim
Background subtraction is widely employed for detecting moving objects when the background does not show much dynamic behavior. Many background models have been proposed. Most of them analyze only the temporal behavior of pixels and ignore the spatial relations of the neighborhood, which may be key to better separating foreground from background when the background has dynamic activity. To remedy this, some researchers have proposed spatio-temporal approaches, usually in a block-based framework. Two recent reviews [1, 2] showed that the temporal kernel density estimation (KDE) method and the temporal Gaussian mixture model (GMM) perform about equally best among temporal background models. A spatio-temporal version of KDE has been proposed; for GMM, however, an explicit extension to the spatio-temporal domain is not easily found in the literature. In this paper, we propose an extension of GMM from the temporal to the spatio-temporal domain. We applied the methods to well-known test sequences and found that the proposed model outperforms the temporal GMM.
Citations: 12
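The temporal GMM baseline this paper extends maintains, per pixel, a small mixture of Gaussians updated online; a pixel that matches a mode is background, otherwise the weakest mode is replaced. The sketch below is a Stauffer-Grimson-style single-pixel update, not this paper's spatio-temporal extension; the learning rate, match threshold, and initial variance are illustrative values.

```python
def update_gmm(pixel, modes, lr=0.05, match_thresh=2.5):
    """One online update of a per-pixel Gaussian mixture; each mode is
    [weight, mean, var]. Returns True if the pixel matched a mode."""
    matched = False
    for m in modes:
        w, mu, var = m
        if not matched and (pixel - mu) ** 2 <= (match_thresh ** 2) * var:
            matched = True
            m[0] = w + lr * (1 - w)                    # grow the weight
            m[1] = mu + lr * (pixel - mu)              # pull mean to pixel
            m[2] = var + lr * ((pixel - mu) ** 2 - var)
        else:
            m[0] = w * (1 - lr)                        # decay unmatched
    if not matched:
        # replace the weakest mode with one centred on the pixel
        weakest = min(range(len(modes)), key=lambda i: modes[i][0])
        modes[weakest] = [lr, float(pixel), 30.0]
    total = sum(m[0] for m in modes)
    for m in modes:
        m[0] /= total                                  # renormalize weights
    return matched
```

The paper's spatio-temporal variant would feed neighborhood samples, not just the single pixel's history, into this kind of update.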
DLH/CLLS: An Open, Extensible System Design for Prosuming Lecture Recordings and Integrating Multimedia Learning Ecosystems
2012 IEEE International Symposium on Multimedia Pub Date : 2012-12-10 DOI: 10.1109/ISM.2012.97
Kai Michael Höver, Gundolf von Bachhaus, M. Hartle, M. Mühlhäuser
The production of lecture recordings is becoming increasingly important for university education and is highly appreciated by students. However, lecture recordings and the corresponding systems are only a subset of the learning materials and tools that exist in learning environments. This calls for learning system designs that are easily accessible, extensible, and open to integration with other environments, data sources, and user (inter)actions. The contributions of this paper are as follows: we present a system that supports educators in presenting, recording, and providing their lectures, together with a system design following Linked Data principles that facilitates integration and lets users interact both with each other and with learning materials.
Citations: 12
A Variational Bayesian Inference Framework for Multiview Depth Image Enhancement
2012 IEEE International Symposium on Multimedia Pub Date : 2012-12-10 DOI: 10.1109/ISM.2012.44
P. Rana, Jalil Taghia, M. Flierl
In this paper, a general model-based framework for multiview depth image enhancement is proposed. Depth imagery plays a pivotal role in emerging free-viewpoint television. This technology requires high-quality virtual view synthesis to enable viewers to move freely in a dynamic real-world scene. Depth imagery from different viewpoints is used to synthesize an arbitrary number of novel views. Usually, the depth imagery is estimated individually by stereo-matching algorithms and hence lacks inter-view consistency. This inconsistency negatively affects the quality of view synthesis. This paper enhances the inter-view consistency of multiview depth imagery using a variational Bayesian inference framework. First, our approach classifies the color information in the multiview color imagery. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is subjected to further sub-clustering. Finally, the resulting means of the sub-clusters are used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the quality of virtual views by up to 0.25 dB.
Citations: 7
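The cluster-then-smooth idea in this abstract, group pixels by color and then regularize depth within each group, can be shown in miniature. This sketch collapses the paper's variational Bayesian machinery and sub-clustering into a single level: pixels join their nearest color cluster and each pixel's depth is replaced by its cluster's mean depth. Scalar "colors" and precomputed cluster centers are simplifying assumptions.

```python
from statistics import mean

def enhance_depth(colors, depths, color_centers):
    """Assign each pixel to its nearest color cluster, then replace its
    depth with the cluster's mean depth (single-level illustration of
    the paper's two-stage clustering)."""
    assign = [min(range(len(color_centers)),
                  key=lambda c: abs(colors[i] - color_centers[c]))
              for i in range(len(colors))]
    out = list(depths)
    for c in range(len(color_centers)):
        members = [i for i, a in enumerate(assign) if a == c]
        if members:
            m = mean(depths[i] for i in members)
            for i in members:
                out[i] = m
    return out
```

Doing this jointly across viewpoints, rather than per image, is what restores the inter-view consistency the abstract is after.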
Video-Based Lane Detection Using a Fast Vanishing Point Estimation Method
2012 IEEE International Symposium on Multimedia Pub Date : 2012-12-10 DOI: 10.1109/ISM.2012.70
Burak Benligiray, C. Topal, C. Akinlar
Lane detection algorithms form a basis for intelligent vehicle systems such as lane tracking and involuntary lane departure detection. In this paper, we propose a simple video-based lane detection algorithm that uses a fast vanishing point estimation method. The first step of the algorithm is to extract and validate line segments from the image with a recently proposed line detection algorithm. Next, an angle-based elimination of line segments is performed according to the perspective characteristics of lane markings. This basic operation removes many line segments that belong to irrelevant scene details and greatly reduces the number of features to be processed afterwards. The remaining line segments are extrapolated and superimposed to detect the image location where the majority of the linear edge features converge; this location is taken as the vanishing point. Subsequently, an orientation-based removal eliminates line segments whose extensions do not intersect the vanishing point. The final step clusters the remaining line segments such that each cluster represents a lane marking or a road boundary (i.e., sidewalks, barriers, or shoulders). The properties of the line segments in each cluster are fused to represent the cluster with a single line. The two clusters nearest the vehicle are chosen as the lines bounding the lane being driven on. The proposed algorithm runs in an average of 12 milliseconds per 640×480 frame on a 2.20 GHz Intel CPU, showing that it can be deployed on minimal hardware and still provide real-time performance.
Citations: 35
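The core geometric step, extrapolating segments and finding where they converge, reduces to intersecting pairs of lines and aggregating the intersection points. The sketch below uses the mean of pairwise intersections as the vanishing point estimate; the paper's angle-based pre-filtering of segments is omitted, and the aggregation rule is an illustrative simplification.

```python
def line_from_segment(p1, p2):
    """Line through two points in ax + by = c form."""
    a = p2[1] - p1[1]
    b = p1[0] - p2[0]
    return a, b, a * p1[0] + b * p1[1]

def intersection(s1, s2):
    """Intersection of two segments' supporting lines (Cramer's rule),
    or None if they are parallel."""
    a1, b1, c1 = line_from_segment(*s1)
    a2, b2, c2 = line_from_segment(*s2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def estimate_vanishing_point(segments):
    """Mean of all pairwise intersections of the extrapolated segments."""
    pts = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            p = intersection(segments[i], segments[j])
            if p:
                pts.append(p)
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

With the vanishing point in hand, the orientation-based removal step is just a check of whether each segment's supporting line passes close enough to that point.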