2012 IEEE International Conference on Multimedia and Expo Workshops: Latest Publications

Contextual Dominant Color Name Extraction for Web Image Search
Peng Wang, Dongqing Zhang, Gang Zeng, Jingdong Wang
2012 IEEE International Conference on Multimedia and Expo Workshops. Pub Date: 2012-07-09. DOI: 10.1109/ICMEW.2012.61
Abstract: This paper addresses the problem of extracting perceptually dominant color names from images. Our approach is motivated by the principle that the pixels corresponding to one dominant color name identified by humans are often context dependent, spatially connected, and form a perceptually meaningful region. Our algorithm first learns a probabilistic mapping from an RGB color to a color name. Then, a double-threshold approach is used to determine the color name of an RGB pixel in a specific image by considering its neighboring pixels. This scheme effectively deals with pixels that ambiguously belong to several dominant color names. Last, saliency information is combined in to extract the perceptually dominant colors. Experiments on our labeled image data set and the Ebay image set demonstrate the effectiveness of our approach.
Citations: 15
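The double-threshold idea in the abstract can be sketched as follows. This is a hedged toy illustration, not the paper's method: the learned RGB-to-name distribution is replaced by a made-up `name_probs` table, and the thresholds and 4-neighborhood are illustrative assumptions. A confident pixel is named directly; an ambiguous one inherits the majority name of its already-named neighbors.

```python
# Toy sketch of double-threshold color naming with neighbor context.
# name_probs, T_HIGH, and T_LOW are illustrative assumptions.
from collections import Counter

T_HIGH, T_LOW = 0.8, 0.4  # assumed confidence thresholds

def name_probs(rgb):
    """Stand-in for the learned RGB -> color-name distribution."""
    r, g, b = rgb
    total = r + g + b + 1e-9
    return {"red": r / total, "green": g / total, "blue": b / total}

def label_image(pixels):
    """pixels: 2D list of RGB triples; returns a 2D list of color names."""
    h, w = len(pixels), len(pixels[0])
    labels = [[None] * w for _ in range(h)]
    # Pass 1: label pixels whose top color-name probability is high.
    for y in range(h):
        for x in range(w):
            probs = name_probs(pixels[y][x])
            best = max(probs, key=probs.get)
            if probs[best] >= T_HIGH:
                labels[y][x] = best
    # Pass 2: ambiguous pixels take the majority vote of labeled neighbors.
    for y in range(h):
        for x in range(w):
            if labels[y][x] is not None:
                continue
            probs = name_probs(pixels[y][x])
            best = max(probs, key=probs.get)
            if probs[best] < T_LOW:
                continue  # too uncertain; leave unnamed
            neigh = [labels[ny][nx]
                     for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                     if 0 <= ny < h and 0 <= nx < w and labels[ny][nx]]
            labels[y][x] = Counter(neigh).most_common(1)[0][0] if neigh else best
    return labels
```

For example, a reddish but ambiguous pixel surrounded by confidently red pixels is pulled into the red region, which is the spatial-connectedness intuition the abstract describes.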
Improved Image Retargeting by Distinguishing between Faces in Focus and Out of Focus
J. Kiess, Rodrigo Garcia, S. Kopf, W. Effelsberg
2012 IEEE International Conference on Multimedia and Expo Workshops. Pub Date: 2012-07-09. DOI: 10.1109/ICMEW.2012.32
Abstract: The identification of relevant objects in an image is highly important in the context of image retargeting. Faces especially draw the attention of viewers, but the level of relevance may differ between faces depending on size, location, or whether a face is in focus. In this paper, we present a novel algorithm which distinguishes in-focus and out-of-focus faces. A face detector with multiple cascades is first used to locate initial face regions. We analyze the ratio of strong edges in each face region to classify out-of-focus faces. Finally, we use the GrabCut algorithm to segment the faces and define binary face masks. These masks can then be used as an additional input to image retargeting algorithms.
Citations: 6
Social Photo Tagging Recommendation Using Community-Based Group Associations
Chien-Li Chou, Yee-Choy Chean, Yi-Cheng Chen, Hua-Tsung Chen, Suh-Yin Lee
2012 IEEE International Conference on Multimedia and Expo Workshops. Pub Date: 2012-07-09. DOI: 10.1109/ICMEW.2012.46
Abstract: In social networks, personal photos make up a large portion of web content. To share a photo with the people appearing in it, users have to manually tag those people with their names, after which the social network system links the photo to them. However, tagging photos manually is time-consuming when people take thousands of photos in their daily lives, so more and more researchers are studying how to recommend tags for a photo. In this paper, our goal is to recommend tags for a query photo with one tagged face. We fuse the results of face recognition with the user's relationships obtained from social contexts. In addition, Community-Based Group Associations (CBGA) are proposed to discover group associations among users through community detection. Experimental evaluations show that the performance of photo tagging recommendation is improved by combining face recognition and social relationships, and that the proposed framework achieves high quality for social photo tagging recommendation.
Citations: 2
How Many Frames Does Facial Expression Recognition Require?
Kaimin Yu, Zhiyong Wang, Genliang Guan, Qiuxia Wu, Z. Chi, D. Feng
2012 IEEE International Conference on Multimedia and Expo Workshops. Pub Date: 2012-07-09. DOI: 10.1109/ICMEW.2012.56
Abstract: Facial expression analysis is essential to enable socially intelligent processing of multimedia video content. Most facial expression recognition algorithms analyze the whole image sequence of an expression to exploit its temporal characteristics. However, it has seldom been studied whether it is necessary to utilize all the frames of a sequence, since human beings are able to capture the dynamics of facial expressions from very short sequences (even a single frame). In this paper, we investigate the impact of the number of frames in a facial expression sequence on facial expression recognition accuracy. In particular, we develop a key frame selection method through key point based frame representation. Experimental results on the popular CK facial expression dataset indicate that recognition accuracy achieved with half of the sequence frames is comparable to that of utilizing all the sequence frames. Our key frame selection method can further reduce the number of frames without clearly compromising recognition accuracy.
Citations: 1
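The key-frame selection step described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden stand-in: the paper's key-point-based frame representation is replaced by arbitrary per-frame feature vectors, and the distance threshold is invented for illustration. A frame is kept only if it differs enough from the last kept frame, shrinking the sequence the recognizer must process.

```python
# Minimal greedy key-frame selection over per-frame feature vectors.
# The feature representation and threshold are illustrative assumptions.
import math

def select_key_frames(features, threshold=1.0):
    """features: list of equal-length feature vectors, one per frame.
    Returns the indices of the retained key frames."""
    if not features:
        return []
    kept = [0]  # always keep the first frame
    for i in range(1, len(features)):
        ref = features[kept[-1]]
        # Euclidean distance to the most recently kept frame.
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(features[i], ref)))
        if dist >= threshold:
            kept.append(i)
    return kept
```

On a sequence whose expression changes slowly, this keeps only the frames where the representation moves appreciably, matching the paper's finding that a fraction of the frames can suffice.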
Human Gesture Analysis Using Multimodal Features
Dan Luo, H. K. Ekenel, J. Ohya
2012 IEEE International Conference on Multimedia and Expo Workshops. Pub Date: 2012-07-09. DOI: 10.1109/ICMEW.2012.88
Abstract: Human gesture, as a natural interface, plays a vital role in achieving intelligent Human Computer Interaction (HCI). Human gestures convey meaning through different components of visual action, such as hand motion, facial expression, and torso movement. So far, most work in gesture recognition has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework which combines different feature groups, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions, covering neutral, negative, and positive meanings, from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and PLS is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
Citations: 6
Resource Allocation for Service Composition in Cloud-based Video Surveillance Platform
M. S. Hossain, M. Hassan, Muhammad Al-Qurishi, A. Alghamdi
2012 IEEE International Conference on Multimedia and Expo Workshops. Pub Date: 2012-07-09. DOI: 10.1109/ICMEW.2012.77
Abstract: Resource allocation plays an important role in service composition for a cloud-based video surveillance platform. In this platform, the utilization of computational resources is managed by accessing various services from virtual machine (VM) resources. A single service accessed from VMs running inside such a cloud platform may not satisfy the application demands of all surveillance users; services need to be modeled as a value-added composite service. In order to provide such a composite service to the customer, VM resources must be utilized optimally so that QoS requirements are fulfilled. To optimize VM resource allocation, we use a linear programming approach as well as heuristics. Simulation results show that our approach outperforms existing VM allocation schemes in a cloud-based video surveillance environment in terms of cost and response time.
Citations: 63
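The shape of the optimization the abstract describes (minimize cost subject to a QoS bound) can be shown on a toy instance. The paper uses linear programming plus heuristics; the exhaustive search below, with made-up VM costs and latencies, is only a hedged illustration of the objective and constraint, not the authors' algorithm.

```python
# Toy cost-minimizing VM assignment for a service composition under a
# response-time (QoS) bound. VM costs/latencies are invented for illustration.
from itertools import product

def cheapest_composition(services, vm_types, max_response_time):
    """services: number of services in the composition.
    vm_types: list of (cost, latency) per VM type.
    Returns (total_cost, assignment) or None if the QoS bound is infeasible."""
    best = None
    for assign in product(range(len(vm_types)), repeat=services):
        cost = sum(vm_types[i][0] for i in assign)
        latency = sum(vm_types[i][1] for i in assign)
        if latency <= max_response_time and (best is None or cost < best[0]):
            best = (cost, list(assign))
    return best
```

With a fast-but-expensive type (cost 10, latency 1) and a slow-but-cheap type (cost 3, latency 5), a two-service composition under a latency bound of 6 mixes the two types rather than buying two fast VMs, which is exactly the cost/response-time trade-off the paper evaluates.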
Extracting Context Information from Microblog Based on Analysis of Online Reviews
T. Takehara, Shohei Miki, Naoko Nitta, N. Babaguchi
2012 IEEE International Conference on Multimedia and Expo Workshops. Pub Date: 2012-07-09. DOI: 10.1109/ICMEW.2012.49
Abstract: Recommender systems automatically determine suitable items for users. Although the preferences and context of users have been widely utilized to evaluate the suitability of items, the surrounding context has seldom been considered. Observing that many ordinary people voluntarily report their observations of the current situation of the world on microblogs, this paper proposes a recommender system which not only recommends suitable restaurants to users based on their preferences and context, but also provides the surrounding context information reported on microblogs, which further affects users' restaurant selection behavior. In particular, considering that such influential surrounding context information in microblogs includes keywords related to restaurant assessment, we propose a method for automatically determining the keywords used to extract the context information by analyzing online reviews, which have also been gathered from ordinary people over a long period of time. Experiments using Twitter as the microblog and Tabelog, a popular online restaurant review site in Japan, as the source of online reviews indicated that the influential context information can be extracted from Twitter with a highest recall of 93.3% by using area-related keywords. Additionally, using restaurant-related keywords was effective in removing irrelevant information, obtaining a precision of 15.9%.
Citations: 12
Query by Humming by Using Locality Sensitive Hashing Based on Combination of Pitch and Note
Qiang Wang, Zhiyuan Guo, Gang Liu, Jun Guo, Yueming Lu
2012 IEEE International Conference on Multimedia and Expo Workshops. Pub Date: 2012-07-09. DOI: 10.1109/ICMEW.2012.58
Abstract: Query by humming (QBH) is a technique used for content-based music information retrieval. It remains a challenging unsolved problem due to humming errors. In this paper a novel retrieval method called note-based locality sensitive hashing (NLSH) is presented, and it is combined with pitch-based locality sensitive hashing (PLSH) to screen candidate fragments. The method extracts PLSH and NLSH vectors from the database to construct two indexes. In the retrieval phase, it extracts vectors in the same way as during index construction and searches the indexes to obtain a list of candidates. Recursive alignment (RA) is then executed on these surviving candidates. Experiments are conducted on a database of 5,000 MIDI files with the 2010 MIREX-QBH query corpus. The results show that with the combination approach, the relative improvements in mean reciprocal rank are 29.7% (humming from anywhere) and 23.8% (humming from the beginning), respectively, compared with the current state-of-the-art method.
Citations: 8
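The candidate-screening step behind PLSH/NLSH can be sketched with a generic random-projection LSH index over fixed-length pitch (or note) vectors: a hummed query is only compared against songs sharing its hash bucket. The projection count, seed, and data below are assumptions; the paper's actual hash construction and indexing details differ.

```python
# Generic random-projection LSH over pitch vectors (illustrative only).
import random

class PitchLSH:
    def __init__(self, dim, n_projections=8, seed=0):
        rng = random.Random(seed)
        # Random hyperplanes; the sign pattern of the dot products is the key.
        self.planes = [[rng.gauss(0, 1) for _ in range(dim)]
                       for _ in range(n_projections)]
        self.buckets = {}

    def _key(self, vec):
        return tuple(sum(p * v for p, v in zip(plane, vec)) >= 0
                     for plane in self.planes)

    def index(self, song_id, vec):
        """Add a melody fragment's feature vector to the index."""
        self.buckets.setdefault(self._key(vec), []).append(song_id)

    def candidates(self, query_vec):
        """Return song ids sharing the query's bucket (screening step)."""
        return self.buckets.get(self._key(query_vec), [])
```

Similar pitch contours tend to land in the same bucket, so only a small candidate list survives for the expensive alignment stage (recursive alignment in the paper).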
Intelligent Vehicle Detection and Tracking for Highway Driving
Wanxin Xu, Meikang Qiu, Zhi Chen, Hai Su
2012 IEEE International Conference on Multimedia and Expo Workshops. Pub Date: 2012-07-09. DOI: 10.1109/ICMEW.2012.19
Abstract: Due to the increasing number of vehicles, traffic congestion in cities has become a serious challenge, and people's safety is threatened. Intelligent transportation systems (ITS) and intelligent vehicles are critical to the efficiency of city transportation. Within ITS and intelligent vehicle research, moving vehicle detection and tracking are among the most challenging problems. In this paper, we propose a framework for vehicle detection and tracking and conduct in-depth research on its key algorithms and techniques. We also conduct a series of experiments on the basis of existing results. Experimental results show that our proposed approach is feasible and effective for vehicle detection and tracking.
Citations: 3
Research Design for Evaluating How to Engage Students with Urban Public Screens in Students' Neighbourhoods
A. Lugmayr, Yuan Fu
2012 IEEE International Conference on Multimedia and Expo Workshops. Pub Date: 2012-07-09. DOI: 10.1109/ICMEW.2012.68
Abstract: Public screens are spreading throughout urban residential environments: in buses, trains, shopping centers, and at bus stops. Currently they are mostly used for advertising; within the scope of this publication, however, we focus on a new, less obvious application area: the use of public screens in student villages. With innovative technologies emerging and students increasingly demanding the latest technologies, there is a need and desire to connect residents with businesses in the local vicinity. In addition, social networks can help foster deeper integration of the community and its services. We present a study of the usage of public screen environments in the student vicinities of Kelvin Grove, Brisbane, Australia and Hervanta, Tampere, Finland. The study had three goals: (1) interviews with business owners to evaluate their needs for content and services, (2) student questionnaires to gain insights into consumer desires and expectations, and (3) development of a roadmap and service concepts for public screens in student vicinities.
Citations: 3