2012 IEEE International Symposium on Multimedia: Latest Publications

Quantifying the Makeup Effect in Female Faces and Its Applications for Age Estimation
2012 IEEE International Symposium on Multimedia | Pub Date: 2012-12-10 | DOI: 10.1109/ISM.2012.29
Ranran Feng, B. Prabhakaran
Abstract: This paper first conducts a comprehensive statistical study of the makeup effect on facial parts (skin, eyes, and lips). Based on this study, a method is proposed to detect whether makeup has been applied in an input facial image, and the makeup effect is then quantified as a Young Index (YI) for female age estimation. An age estimator that takes the makeup effect into account is presented. Experimental results show that, with the makeup effect considered, the proposed method improves accuracy by 0.9-6.7% in CS (Cumulative Score) and by 0.26-9.76 in MAE (Mean of Absolute Errors between the estimated age and the ground-truth age labeled in or acquired from the data) compared with other age estimation methods.
Citations: 10
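The CS and MAE figures quoted above are standard age-estimation metrics: MAE is the mean absolute error in years, and CS(t) is the fraction of test samples whose error is within t years. A minimal illustrative sketch (not the authors' code; the toy predictions are made up):

```python
import numpy as np

def mae(pred_ages, true_ages):
    """Mean of Absolute Errors between estimated and ground-truth ages."""
    return np.mean(np.abs(np.asarray(pred_ages) - np.asarray(true_ages)))

def cumulative_score(pred_ages, true_ages, tolerance):
    """CS(t): fraction of samples whose absolute age error is <= t years."""
    errors = np.abs(np.asarray(pred_ages) - np.asarray(true_ages))
    return np.mean(errors <= tolerance)

# Toy example with hypothetical predictions (illustration only).
pred = [23.0, 31.5, 40.2, 19.8]
true = [25, 30, 45, 20]
print(mae(pred, true))                  # average absolute error in years
print(cumulative_score(pred, true, 5))  # share of estimates within 5 years
```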
Detection and Identification of Chimpanzee Faces in the Wild
2012 IEEE International Symposium on Multimedia | Pub Date: 2012-12-10 | DOI: 10.1109/ISM.2012.30
A. Loos, Andreas Ernst
Abstract: In this paper, we present and evaluate a unified automatic image-based face detection and identification framework using two datasets of captive and free-living chimpanzee individuals gathered in uncontrolled environments. This application scenario entails several challenging problems, such as varying lighting, diverse expressions, partial occlusion, and non-cooperative subjects. After the faces and facial feature points are detected, we use a projective transformation to align the face images. All faces are then identified using an appearance-based face recognition approach combined with additional information from local regions of the apes' faces. We conducted open-set identification experiments on both datasets. Even though the datasets are very challenging, the system achieved promising results and therefore has the potential to open up new ways of effective biodiversity conservation management.
Citations: 13
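The alignment step described above maps detected facial feature points onto a canonical layout via a projective transformation. A minimal sketch of that step with OpenCV; the landmark coordinates, canonical target positions, and file names are hypothetical, and the paper's actual landmarks and reference layout may differ:

```python
import cv2
import numpy as np

# Four detected facial feature points (e.g., eyes, nose, mouth) in the input
# crop, and their canonical positions in a 128x128 aligned face. All
# coordinates are invented for illustration.
src_pts = np.float32([[42, 55], [88, 53], [64, 80], [63, 105]])
dst_pts = np.float32([[40, 48], [88, 48], [64, 76], [64, 104]])

# A projective (perspective) transform needs exactly 4 point correspondences.
H = cv2.getPerspectiveTransform(src_pts, dst_pts)

img = cv2.imread("chimp_face.jpg")      # hypothetical input crop
if img is None:                         # fall back to a blank canvas so the
    img = np.zeros((128, 128, 3), np.uint8)  # sketch runs without the file
aligned = cv2.warpPerspective(img, H, (128, 128))
cv2.imwrite("chimp_face_aligned.jpg", aligned)
```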
Energy Consumption Reduction via Context-Aware Mobile Video Pre-fetching
2012 IEEE International Symposium on Multimedia | Pub Date: 2012-12-10 | DOI: 10.1109/ISM.2012.56
A. Devlic, P. Lungaro, P. Kamaraju, Z. Segall, Konrad Tollmar
Abstract: The arrival of smartphones and tablets, along with flat-rate mobile Internet pricing, has driven increasing adoption of mobile data services. According to recent studies, video has been the main driver of mobile data consumption, growing faster than any other mobile application. However, streaming medium- and high-quality video files can be an issue in a mobile environment where available capacity must be shared among many users. Additionally, the energy consumption of mobile devices increases proportionally with the duration of data transfers, which depends on the download data rates achievable by the device. In this respect, opportunistic content pre-fetching schemes, which exploit times and locations with high data rates to deliver content before a user requests it, have the potential to reduce the energy consumption associated with content delivery and to improve the user's quality of experience by allowing playback of pre-stored content with virtually no perceived interruptions or delays. This paper presents a family of opportunistic content pre-fetching schemes and compares their performance to standard on-demand access to content. Using a simulation approach on experimental data collected with monitoring software installed on mobile terminals, we show that content pre-fetching can reduce the energy consumption of mobile devices by up to 30% compared with on-demand download of the same file, given a time window of 1 hour to complete the content prepositioning.
Citations: 13
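The energy argument rests on transfer time being inversely proportional to the achievable data rate: downloading at a better rate means the radio is on for less time. A toy comparison under that assumption (all power, size, and rate numbers are hypothetical; the paper's models come from measured traces):

```python
# Toy comparison of on-demand download vs. opportunistic pre-fetching.
# Assumes radio energy is roughly proportional to transfer duration:
# energy = power_radio * (file_size / data_rate).

POWER_RADIO_W = 1.2      # hypothetical radio power draw while transferring
FILE_SIZE_MBIT = 400.0   # hypothetical video file size

def transfer_energy_joules(data_rate_mbps):
    return POWER_RADIO_W * (FILE_SIZE_MBIT / data_rate_mbps)

# Data rates (Mbit/s) observed along the user's path during the hour before
# the request -- a pre-fetcher can pick the best opportunity in the window.
rates_in_window = [2.0, 8.0, 25.0, 5.0, 1.5]
rate_at_request = 3.0    # rate available when the user actually presses play

on_demand = transfer_energy_joules(rate_at_request)
pre_fetch = transfer_energy_joules(max(rates_in_window))
print(f"on-demand: {on_demand:.0f} J, pre-fetch: {pre_fetch:.0f} J")
print(f"saving: {100 * (1 - pre_fetch / on_demand):.0f}%")
```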
DLH/CLLS: An Open, Extensible System Design for Prosuming Lecture Recordings and Integrating Multimedia Learning Ecosystems
2012 IEEE International Symposium on Multimedia | Pub Date: 2012-12-10 | DOI: 10.1109/ISM.2012.97
Kai Michael Höver, Gundolf von Bachhaus, M. Hartle, M. Mühlhäuser
Abstract: The production of lecture recordings is becoming increasingly important for university education and is highly appreciated by students. However, lecture recordings and the corresponding systems are only one subset of the many kinds of learning materials and tools that exist in learning environments. This calls for learning system designs that are easily accessible, extensible, and open to integration with other environments, data sources, and user (inter)actions. The contribution of this paper is twofold: we propose a system that supports educators in presenting, recording, and providing their lectures, together with a system design following Linked Data principles that facilitates integration and enables users to interact both with each other and with learning materials.
Citations: 12
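The Linked Data design mentioned above amounts to exposing lectures, recordings, and user annotations as RDF resources with dereferenceable URIs so that other learning tools can consume them. A minimal sketch with rdflib; the vocabulary, class names, and URIs are invented for illustration, since the paper does not spell out its schema here:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# Hypothetical vocabulary for lecture resources; not from the paper.
DLH = Namespace("http://example.org/dlh/")

g = Graph()
lecture = URIRef("http://example.org/dlh/lectures/ism-2012-01")

g.add((lecture, RDF.type, DLH.LectureRecording))
g.add((lecture, DCTERMS.title, Literal("Introduction to Multimedia Systems")))
g.add((lecture, DLH.hasSlideDeck, URIRef("http://example.org/dlh/slides/ism-2012-01")))
g.add((lecture, DLH.annotatedBy, URIRef("http://example.org/dlh/users/alice")))

# Serializing as Turtle makes the data consumable by other learning tools.
print(g.serialize(format="turtle"))
```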
Using Low Level Gradient Channels for Computationally Efficient Object Detection and Its Application in Logo Detection
2012 IEEE International Symposium on Multimedia | Pub Date: 2012-12-10 | DOI: 10.1109/ISM.2012.51
Yu Chen, V. Thing
Abstract: We propose a logo detection approach that uses Haar-like features computed directly from the gradient orientation channel, the gradient magnitude channel, and the gray-intensity channel to effectively and efficiently extract discriminative features for a variety of logo images. The major contributions of this work are twofold: 1) we explicitly demonstrate that, with an optimized design and implementation, considerable discriminative power can be obtained from simple features such as Haar features extracted directly from the low-level gradient orientation and magnitude channels; 2) we propose an effective and efficient logo detection approach using the Haar features obtained directly from the gradient orientation, gradient magnitude, and gray-image channels. Experimental results on collected merchandise images of Louis Vuitton (LV) and Polo Ralph Lauren (PRL) products show the promising applicability of our approach.
Citations: 0
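The channels described above are cheap to compute, and a Haar-like response over any of them reduces to a few integral-image lookups. A rough numpy/OpenCV sketch of that pipeline; the orientation quantization and the example two-rectangle feature are assumptions, not the authors' exact design:

```python
import cv2
import numpy as np

def gradient_channels(gray, n_orient_bins=6):
    """Gray intensity + gradient magnitude + quantized-orientation channels."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx * gx + gy * gy)
    ang = np.arctan2(gy, gx) % np.pi            # orientation in [0, pi)
    channels = [gray.astype(np.float32), mag]
    bin_width = np.pi / n_orient_bins
    for b in range(n_orient_bins):
        mask = (ang >= b * bin_width) & (ang < (b + 1) * bin_width)
        channels.append(np.where(mask, mag, 0.0))  # magnitude-weighted bin
    return channels

def rect_sum(ii, x, y, w, h):
    """O(1) rectangle sum from an integral image (cv2.integral pads by 1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    """Example two-rectangle Haar feature: left half minus right half."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)

gray = cv2.imread("logo.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
if gray is None:                                     # fall back to noise so
    gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # it still runs
integrals = [cv2.integral(c) for c in gradient_channels(gray)]
features = [haar_two_rect(ii, 10, 10, 24, 24) for ii in integrals]
print(features)
```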
Effective Moving Object Detection and Retrieval via Integrating Spatial-Temporal Multimedia Information
2012 IEEE International Symposium on Multimedia | Pub Date: 2012-12-10 | DOI: 10.1109/ISM.2012.74
Dianting Liu, M. Shyu
Abstract: In multimedia semantic analysis and video retrieval, automatic object detection techniques play an important role: without analysis of object-level features, it is hard to achieve high performance in semantic retrieval. As a branch of object detection research, moving object detection has become an active field and has made considerable progress recently. This paper proposes a moving object detection and retrieval model that integrates spatial and temporal information in video sequences and uses the proposed integral density method (adapted from the idea of integral images) to quickly identify motion regions in an unsupervised way. First, key information locations in video frames are obtained as the maxima and minima of a Difference of Gaussian (DoG) function. In parallel, a motion map of adjacent frames is obtained from the differences between outcomes of the Simultaneous Partition and Class Parameter Estimation (SPCPE) framework. The motion map filters the key information locations into key motion locations (KMLs), where the existence of moving objects is implied. Besides delimiting the motion zones, the motion map also indicates the motion direction, which guides the proposed integral density approach to quickly and accurately locate the motion regions. The detection results are not only illustrated visually but also verified by promising experimental results, which show that concept retrieval performance can be improved by integrating global and local visual information.
Citations: 15
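The "integral density" idea borrows the integral-image trick: once keypoint hits are accumulated into a summed-area table, counting the keypoints in any rectangle costs four lookups. A small sketch under that reading; the exhaustive window scan is an assumption, since the paper instead steers the search with its SPCPE motion maps:

```python
import numpy as np

def integral_density(h, w, keypoints):
    """Summed-area table over a binary keypoint-occurrence map."""
    hits = np.zeros((h, w), dtype=np.float64)
    for x, y in keypoints:              # keypoints as (x, y) pixel coords
        hits[y, x] += 1.0
    # Pad with a zero row/column so queries need no boundary checks.
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = hits.cumsum(0).cumsum(1)
    return ii

def count_in_rect(ii, x, y, w, h):
    """Number of keypoints inside the rectangle, in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

# Toy usage: DoG-style keypoints (made up), scan for the densest 40x40 window.
kps = [(12, 15), (14, 18), (100, 90), (101, 92), (103, 95)]
ii = integral_density(120, 160, kps)
best = max(((x, y) for y in range(80) for x in range(120)),
           key=lambda p: count_in_rect(ii, p[0], p[1], 40, 40))
print("densest 40x40 window at", best)
```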
Spatio-temporal Gaussian Mixture Model for Background Modeling
2012 IEEE International Symposium on Multimedia | Pub Date: 2012-12-10 | DOI: 10.1109/ISM.2012.73
Y. Soh, Y. Hae, Intaek Kim
Abstract: Background subtraction is widely employed to detect moving objects when the background does not exhibit much dynamic behavior. Many background models have been proposed. Most of them analyze only the temporal behavior of pixels and ignore the spatial relations of the neighborhood, which may be the key to better separating foreground from background when the background has dynamic activity. As a remedy, some researchers have proposed spatio-temporal approaches, usually in a block-based framework. Two recent reviews [1, 2] showed that the temporal kernel density estimation (KDE) method and the temporal Gaussian mixture model (GMM) perform about equally best among temporal background models. A spatio-temporal version of KDE has been proposed; for GMM, however, an explicit extension to the spatio-temporal domain is not easily found in the literature. In this paper, we propose an extension of GMM from the temporal domain to the spatio-temporal domain. We applied both methods to well-known test sequences and found that the proposed model outperforms the temporal GMM.
Citations: 12
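For reference, the temporal baseline this paper extends is the per-pixel Stauffer-Grimson GMM update. A compact sketch of that update, with one naive spatial twist (matching against a pixel's 3x3 neighborhood mean rather than the raw value) shown as a plausible reading of "spatio-temporal"; the paper's actual formulation may differ:

```python
import numpy as np

class PixelGMM:
    """Per-pixel Gaussian mixture (Stauffer-Grimson style), grayscale."""
    def __init__(self, k=3, alpha=0.01, init_var=36.0):
        self.w = np.full(k, 1.0 / k)      # component weights
        self.mu = np.linspace(0, 255, k)  # component means
        self.var = np.full(k, init_var)   # component variances
        self.alpha = alpha

    def update(self, x):
        d2 = (x - self.mu) ** 2
        match = d2 < 6.25 * self.var      # within 2.5 sigma of a component
        self.w = (1 - self.alpha) * self.w + self.alpha * match
        if match.any():
            i = np.argmax(match / (self.var + 1e-9))  # best matching component
            self.mu[i] += self.alpha * (x - self.mu[i])
            self.var[i] += self.alpha * (d2[i] - self.var[i])
        else:
            i = np.argmin(self.w)         # replace the weakest component
            self.mu[i], self.var[i] = x, 100.0
        self.w /= self.w.sum()
        # Simplified rule: foreground if x matches no high-weight component.
        return not (match & (self.w > 0.25)).any()

# Spatial twist (assumption): feed the 3x3 neighborhood mean instead of the
# raw pixel, so a single model sees spatio-temporal evidence.
frame = np.random.randint(0, 256, (4, 4)).astype(float)  # toy frame
pad = np.pad(frame, 1, mode="edge")
neigh_mean = sum(pad[dy:dy+4, dx:dx+4] for dy in range(3) for dx in range(3)) / 9
model = PixelGMM()
print(model.update(neigh_mean[2, 2]))     # True => pixel flagged foreground
```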
Tag Cloud++ - Scalable Tag Clouds for Arbitrary Layouts
2012 IEEE International Symposium on Multimedia | Pub Date: 2012-12-10 | DOI: 10.1109/ISM.2012.66
Minwoo Park, D. Joshi, A. Loui
Abstract: Tag clouds are becoming extremely popular in the multimedia community as media of exploration and expression. In this work, we take tag-cloud construction to a new level by allowing a tag cloud to take any arbitrary shape while preserving an order of tags (here, alphabetical). Our method guarantees non-overlap among words and ensures a compact representation within the specified shape. Experiments on a variety of input tag sets and tag-cloud shapes show that the proposed method is promising and achieves real-time performance. Finally, we demonstrate the applicability of our method with an application in which tag clouds specific to places, people, and keywords are constructed and used for digital media selection within a social network domain.
Citations: 2
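The non-overlap guarantee inside an arbitrary shape can be approximated with a greedy scan: try candidate positions in order and accept the first that stays inside the shape mask and clears every placed box. A toy sketch of that idea; the paper's actual packing and ordering strategy is more sophisticated, and the shape mask and tag sizes here are made up:

```python
def overlaps(a, b):
    """Axis-aligned box overlap; boxes are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def inside_shape(box, shape_mask):
    """Shape given as a set of admissible (col, row) cells on a coarse grid."""
    x, y, w, h = box
    return all((cx, cy) in shape_mask
               for cx in range(x, x + w) for cy in range(y, y + h))

def place_tags(tags, shape_mask, grid=(40, 20)):
    """Greedily place (word, w, h) boxes, scanning tags alphabetically."""
    placed = []
    for word, w, h in sorted(tags):       # preserve alphabetical tag order
        for y in range(grid[1]):
            for x in range(grid[0]):
                box = (x, y, w, h)
                if inside_shape(box, shape_mask) and \
                   not any(overlaps(box, p[1]) for p in placed):
                    placed.append((word, box))
                    break
            else:
                continue                  # no slot in this row; try next row
            break                         # placed; move to the next tag
    return placed

# Toy elliptical shape mask and a few tags sized by frequency (made up).
mask = {(x, y) for x in range(40) for y in range(20)
        if (x - 20) ** 2 + (2 * (y - 10)) ** 2 <= 400}
print(place_tags([("beach", 6, 2), ("alps", 5, 2), ("city", 4, 1)], mask))
```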
A Variational Bayesian Inference Framework for Multiview Depth Image Enhancement
2012 IEEE International Symposium on Multimedia | Pub Date: 2012-12-10 | DOI: 10.1109/ISM.2012.44
P. Rana, Jalil Taghia, M. Flierl
Abstract: In this paper, a general model-based framework for multiview depth image enhancement is proposed. Depth imagery plays a pivotal role in emerging free-viewpoint television, a technology that requires high-quality virtual view synthesis to let viewers move freely in a dynamic real-world scene. Depth imagery from different viewpoints is used to synthesize an arbitrary number of novel views. Usually, the depth imagery is estimated individually by stereo-matching algorithms and hence lacks inter-view consistency, which negatively affects the quality of view synthesis. This paper enhances the inter-view consistency of multiview depth imagery using a variational Bayesian inference framework. First, our approach classifies the color information in the multiview color imagery. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is subject to further sub-clustering. Finally, the means of the resulting sub-clusters are used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the quality of virtual views by up to 0.25 dB.
Citations: 7
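A drastically simplified version of the cluster-then-average pipeline, with k-means standing in for the variational Bayesian classification (which the paper uses precisely so the number of clusters need not be fixed in advance); the cluster counts and random data are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def enhance_depth(colors, depths, n_color_clusters=8, n_depth_subclusters=3):
    """colors: (N, 3) pixels pooled across views; depths: (N,) matching depths.
    Replace each depth by the mean of its depth sub-cluster inside its color
    cluster, pushing corresponding pixels in different views toward agreement."""
    color_labels = KMeans(n_clusters=n_color_clusters, n_init=10).fit_predict(colors)
    out = depths.astype(float).copy()
    for c in range(n_color_clusters):
        idx = np.where(color_labels == c)[0]
        if len(idx) < n_depth_subclusters:
            continue                      # too few pixels to sub-cluster
        d = depths[idx].reshape(-1, 1).astype(float)
        sub = KMeans(n_clusters=n_depth_subclusters, n_init=10).fit_predict(d)
        for s in range(n_depth_subclusters):
            sel = idx[sub == s]
            out[sel] = d[sub == s].mean()  # snap depths to sub-cluster mean
    return out

# Toy usage with random data (illustration only).
rng = np.random.default_rng(0)
colors = rng.integers(0, 256, (1000, 3)).astype(float)
depths = rng.integers(0, 256, 1000)
print(enhance_depth(colors, depths)[:5])
```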
Video-Based Lane Detection Using a Fast Vanishing Point Estimation Method
2012 IEEE International Symposium on Multimedia | Pub Date: 2012-12-10 | DOI: 10.1109/ISM.2012.70
Burak Benligiray, C. Topal, C. Akinlar
Abstract: Lane detection algorithms form a basis for intelligent vehicle systems such as lane tracking and involuntary lane departure detection. In this paper, we propose a simple video-based lane detection algorithm that uses a fast vanishing point estimation method. The first step of the algorithm is to extract and validate line segments from the image with a recently proposed line detection algorithm. Next, line segments are eliminated by angle according to the perspective characteristics of lane markings. This basic operation removes many line segments belonging to irrelevant scene details and greatly reduces the number of features to be processed afterwards. The remaining line segments are extrapolated and superimposed to detect the image location where the majority of the linear edge features converge; this location is taken as the vanishing point. Subsequently, an orientation-based removal eliminates line segments whose extensions do not intersect the vanishing point. The final step clusters the remaining line segments such that each cluster represents a lane marking or a road boundary (i.e., sidewalks, barriers, or shoulders). The properties of the line segments constituting each cluster are fused to represent the cluster with a single line. The two clusters nearest the vehicle are chosen as the lines bounding the lane being driven on. The proposed algorithm runs in an average of 12 milliseconds per 640×480 frame on a 2.20 GHz Intel CPU, showing that it can be deployed on minimal hardware while still providing real-time performance.
Citations: 35
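The vanishing point step reduces to intersecting extended line segments and taking a consensus location. A numpy sketch of that core computation; the angle thresholds and the median consensus are assumptions, as the paper superimposes extrapolated segments rather than intersecting pairs:

```python
import numpy as np

def line_params(seg):
    """Infinite line ax + by = c through segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def intersect(s1, s2):
    a1, b1, c1 = line_params(s1)
    a2, b2, c2 = line_params(s2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:                   # near-parallel: no stable crossing
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def estimate_vanishing_point(segments, min_deg=15, max_deg=75):
    """Keep segments whose slope fits lane-marking perspective, then take the
    median of all pairwise intersections as a robust consensus point."""
    def angle(seg):
        (x1, y1), (x2, y2) = seg
        a = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        return 180 - a if a > 90 else a   # direction-independent slope angle
    cands = [s for s in segments if min_deg <= angle(s) <= max_deg]
    pts = [p for i, s1 in enumerate(cands) for s2 in cands[i + 1:]
           if (p := intersect(s1, s2)) is not None]
    return tuple(np.median(np.array(pts), axis=0)) if pts else None

# Toy segments roughly converging near (320, 195) (coordinates made up).
segs = [((100, 480), (300, 220)), ((540, 480), (345, 215)), ((60, 460), (290, 230))]
print(estimate_vanishing_point(segs))
```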