Latest articles from the 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)

A comparison study on motorcycle license plate detection
2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) Pub Date : 2015-07-30 DOI: 10.1109/ICMEW.2015.7169772
G. Hsu, Si-De Zeng, C. Chiu, Sheng-Luen Chung
{"title":"A comparison study on motorcycle license plate detection","authors":"G. Hsu, Si-De Zeng, C. Chiu, Sheng-Luen Chung","doi":"10.1109/ICMEW.2015.7169772","DOIUrl":"https://doi.org/10.1109/ICMEW.2015.7169772","url":null,"abstract":"License plate detection and recognition are mostly studied on automobiles but only few on motorcycles. As motorcycles are becoming popular for local transportation and environmental friendliness, the demands for license plate recognition have been increasing in recent years. The primary difference between the license plate recognition in automobiles and motorcycles is on the detection of license plates, which is the topic of this study. For automobiles, the license plates are mostly installed on the front or on the back of the vehicle with relatively less complicated backgrounds; however, for motorcycles, the backgrounds can be far more complicated. To better handle complicated backgrounds, we study the case with motorcycle detection as preprocessing so that the search area for the license plate can be better constrained, and compare its performance with the case without the preprocessing. A few detection methods are configured and studied for both the motorcycle detection and license plate detection, including the state-of-the-art part-based model. Considering processing speed and accuracy, the histogram of oriented gradients (HOG) with support vector machines (SVMs) is found to be the best detector for motorcycle license plates.","PeriodicalId":388471,"journal":{"name":"2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128537478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Compact hash codes and data structures for efficient mobile visual search
2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) Pub Date : 2015-07-30 DOI: 10.1109/ICMEW.2015.7169856
Simone Ercoli, M. Bertini, A. Bimbo
{"title":"Compact hash codes and data structures for efficient mobile visual search","authors":"Simone Ercoli, M. Bertini, A. Bimbo","doi":"10.1109/ICMEW.2015.7169856","DOIUrl":"https://doi.org/10.1109/ICMEW.2015.7169856","url":null,"abstract":"In this paper we present an efficient method for mobile visual search that exploits compact hash codes and data structures for visual features retrieval. The method has been tested on a large scale standard dataset of one million SIFT features, showing a retrieval performance comparable or superior to state-of-the-art methods, and a very high efficiency in terms of memory consumption and computational requirements. These characteristics make it suitable for application to mobile visual search, where devices have limited computational and memory capabilities.","PeriodicalId":388471,"journal":{"name":"2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129841175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Enhanced terminal for secure mobile communication over tetra and tetrapol networks
2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) Pub Date : 2015-07-30 DOI: 10.1109/ICMEW.2015.7169832
Federico Colangelo, F. Battisti, M. Carli, A. Neri, F. Frosali, C. Olivieri
{"title":"Enhanced terminal for secure mobile communication over tetra and tetrapol networks","authors":"Federico Colangelo, F. Battisti, M. Carli, A. Neri, F. Frosali, C. Olivieri","doi":"10.1109/ICMEW.2015.7169832","DOIUrl":"https://doi.org/10.1109/ICMEW.2015.7169832","url":null,"abstract":"In this contribution the design of a bi-technological enhanced terminal for secure mobile communication is presented. It is based on the TETRA and TETRAPOL communication networks, that are currently used by public and private bodies as network infrastructures and are particularly exploited in case of emergency. The proposed system allows the use of both, TETRA and TETRAPOL, low level communications layers. Furthermore, the proposed terminal introduces high level capabilities that enhance the basic functionalities of these communication systems, while preserving the intrinsics security of the standards. The implementation of the proposed terminal architecture is the basis for the development of an integrated European network where emergency services and first responders share communications, processes, and a legal framework, to be used in case of natural disasters and security threats. In this contribution, the terminal architecture and the security analysis are introduced and discussed.","PeriodicalId":388471,"journal":{"name":"2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129959535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
QoS-driven multipath routing for on-demand video streaming in a Publish-Subscribe Internet
2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) Pub Date : 2015-07-30 DOI: 10.1109/ICMEW.2015.7169758
Y. Thomas, P. A. Frangoudis, George C. Polyzos
{"title":"QoS-driven multipath routing for on-demand video streaming in a Publish-Subscribe Internet","authors":"Y. Thomas, P. A. Frangoudis, George C. Polyzos","doi":"10.1109/ICMEW.2015.7169758","DOIUrl":"https://doi.org/10.1109/ICMEW.2015.7169758","url":null,"abstract":"Aiming to improve the performance of on-demand video streaming services in an Information-Centric Network, we propose a mechanism for selecting multiple delivery paths, satisfying bandwidth, error rate and number-of-paths constraints. Our scheme is developed in the context of the Publish-Subscribe Internet architecture and is shown to outperform state-of-the-art multi-constrained multipath selection mechanisms by up to 7%, and single-path or single-constrained multipath selection schemes by up to 17%, in terms of feasible path discovery, while at the same time improving on bandwidth aggregation. Also, it is suitable for supporting resource-demanding high-definition scalable video streaming services, offering Quality-of-Experience gains.","PeriodicalId":388471,"journal":{"name":"2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115143137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
From Kinect video to realistic and animatable MPEG-4 face model: A complete framework
2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) Pub Date : 2015-07-30 DOI: 10.1109/ICMEW.2015.7169783
Laura Turban, Denis Girard, N. Kose, J. Dugelay
{"title":"From Kinect video to realistic and animatable MPEG-4 face model: A complete framework","authors":"Laura Turban, Denis Girard, N. Kose, J. Dugelay","doi":"10.1109/ICMEW.2015.7169783","DOIUrl":"https://doi.org/10.1109/ICMEW.2015.7169783","url":null,"abstract":"The recent success of the Kinect sensor has a significant impact on 3D data based computer applications. This study aims to obtain MPEG-4 compliant realistic and animatable face models from Kinect video. The complete framework for this process includes initially the computation of high quality 3D scans from RGB-D Kinect video, and then the computation of animatable MPEG-4 face models using these high quality scans. This study shows that it is possible to obtain high quality 3D scans and realistic and animatable face models of subjects using lower quality Kinect data.","PeriodicalId":388471,"journal":{"name":"2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133233714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Representative photo selection for restaurants in food blogs
2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) Pub Date : 2015-07-28 DOI: 10.1109/ICMEW.2015.7169814
Yifan Chang, Hung-Yi Lo, Min-Shan Huang, Min-Chun Hu
{"title":"Representative photo selection for restaurants in food blogs","authors":"Yifan Chang, Hung-Yi Lo, Min-Shan Huang, Min-Chun Hu","doi":"10.1109/ICMEW.2015.7169814","DOIUrl":"https://doi.org/10.1109/ICMEW.2015.7169814","url":null,"abstract":"Nowadays, people write comments of restaurants and upload related photos to food blogs after visiting there. Developing a mobile application which enables the user to effectively search restaurants from data in these blogs becomes an emerging need. Besides reading the comments, most people will give a glance at food photos of a restaurant and then decide whether to go or what to eat. Therefore, we propose a system to analyze and select representative photos for each restaurant based on blog-platform media. A strong food detection model is trained to retrieve food photos and an aesthetic quality assessment method is utilized to select representative photos. Based on these representative photos, users can more easily have the impression of the restaurant and review the blog in an organized way. The experimental results show that our system can generate better representative photos (i.e. much closer to the users' preferences) than existing blog platforms.","PeriodicalId":388471,"journal":{"name":"2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116548761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
A fast and robust emotion recognition system for real-world mobile phone data
2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) Pub Date : 2015-06-01 DOI: 10.1109/ICMEW.2015.7169787
S. Velusamy, Viswanath Gopalakrishnan, B. Anand, P. Chiranjeevi, Basant Kumar Pandey, Pratibha Moogi
{"title":"A fast and robust emotion recognition system for real-world mobile phone data","authors":"S. Velusamy, Viswanath Gopalakrishnan, B. Anand, P. Chiranjeevi, Basant Kumar Pandey, Pratibha Moogi","doi":"10.1109/ICMEW.2015.7169787","DOIUrl":"https://doi.org/10.1109/ICMEW.2015.7169787","url":null,"abstract":"Recognizing emotions of a user while interacting with smart devices like tablets and mobile phones is a prospective computer vision problem. They are used in a variety of applications like web browsing, multimedia content playing, gaming, etc., involving human interactions. We present an emotion recognition framework that analyze the facial expressions of a mobile phone user, under various real-world mobile data challenges like variations in lighting, head pose, expression, user/device movement, and computational complexity. The proposed system includes: (i) Personalized facial points tracking algorithm to suit mobile captured data; (ii) Temporal filter that pre-selects probable emotional frames from the input sequence for further processing, in-order to reduce the processing load; (iii) Face registration and operating region selection for compact facial action unit (AU) representation; (iv) Discriminative feature description of AUs that is robust to illumination changes and face angles; and (v) AU classification and intelligent mapping of the predicted AUs to target emotions. We compare the performance of the proposed ER system with the key state-of-the-art techniques and show a significant improvement on benchmark databases like CK+, ISL, FACS, JAFFE, MultiPie, MindReading, and also on our internally collected mobile phone data set.","PeriodicalId":388471,"journal":{"name":"2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124913243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
MULTISENSOR: Development of multimedia content integration technologies for journalism, media monitoring and international exporting decision support
2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) Pub Date : 2015-06-01 DOI: 10.1109/ICMEW.2015.7169818
S. Vrochidis, Y. Kompatsiaris, Gerard Casamayor, I. Arapakis, R. Busch, V. Alexiev, E. Jamin, Michael Jugov, Nicolaus Heise, Teresa Forrellat, Dimitris Liparas, Leo Wanner, Iris Miliaraki, V. Aleksic, K. Simov, Alan Mas Soro, Mirja Eckhoff, Tilman Wagner, M. Puigbó
{"title":"MULTISENSOR: Development of multimedia content integration technologies for journalism, media monitoring and international exporting decision support","authors":"S. Vrochidis, Y. Kompatsiaris, Gerard Casamayor, I. Arapakis, R. Busch, V. Alexiev, E. Jamin, Michael Jugov, Nicolaus Heise, Teresa Forrellat, Dimitris Liparas, Leo Wanner, Iris Miliaraki, V. Aleksic, K. Simov, Alan Mas Soro, Mirja Eckhoff, Tilman Wagner, M. Puigbó","doi":"10.1109/ICMEW.2015.7169818","DOIUrl":"https://doi.org/10.1109/ICMEW.2015.7169818","url":null,"abstract":"This paper presents an overview and the first results of the FP7 MULTISENSOR project, which deals with multidimensional content integration of multimedia content for intelligent sentiment enriched and context oriented interpretation. MULTISENSOR aims at providing unified access to multilingual, multimedia and multicultural economic, news story material across borders in order to support journalism and media monitoring tasks and provide decision support for internationalisation of companies.","PeriodicalId":388471,"journal":{"name":"2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130298800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A playback continuity driven resource allocation scheme based on the S-P model for video streaming clients
2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) Pub Date : 2015-06-01 DOI: 10.1109/ICMEW.2015.7169782
Lijun He, Guizhong Liu
{"title":"A playback continuity driven resource allocation scheme based on the S-P model for video streaming clients","authors":"Lijun He, Guizhong Liu","doi":"10.1109/ICMEW.2015.7169782","DOIUrl":"https://doi.org/10.1109/ICMEW.2015.7169782","url":null,"abstract":"In this paper, we explore the relationship between the sum size of the video packets of each segment already received by a client and how long the continuous playback time these video packets can support. By using the curve-fitting method, we obtain S-P (Size-Playback time) models for all the video segments. On the basis of the S-P models, we propose a novel resource allocation scheme to improve the playback continuity for the clients. First the transmission capacity requirements of all the clients are calculated. Then a new mathematical model is formulated to maximize the total playback time that the packets to be scheduled can support, subject to the transmission capacity requirements of the clients and the network resource constraints. To solve the problem, a resource allocation scheme with low complexity is proposed. Simulation results show that our proposed algorithm can efficiently improve the playback continuity compared with other existing algorithms.","PeriodicalId":388471,"journal":{"name":"2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130527741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Video saliency map detection based on global motion estimation
2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) Pub Date : 2015-06-01 DOI: 10.1109/ICMEW.2015.7169845
Jun Xu, Qin Tu, Cuiwei Li, Ran Gao, Aidong Men
{"title":"Video saliency map detection based on global motion estimation","authors":"Jun Xu, Qin Tu, Cuiwei Li, Ran Gao, Aidong Men","doi":"10.1109/ICMEW.2015.7169845","DOIUrl":"https://doi.org/10.1109/ICMEW.2015.7169845","url":null,"abstract":"Saliency detection in videos has attracted great attention in recent years due to its wide range of applications, such as object detection and recognition. A novel spatiotemporal saliency detection model is proposed in this paper. The discrete cosine transform coefficients are used as features to generate the spatial saliency maps firstly. Then, a hierarchical structure is utilized to filter motion vectors that might belong to the background. The extracted motion vectors can be used to obtain the rough temporal saliency map. In addition, there are still some outliers in the temporal saliency map and we use the macro-block information to revise it. Finally, an adaptive fusion method is used to merge the spatial and temporal saliency maps of each frame into its spatiotemporal saliency map. The proposed spatiotemporal saliency detection model has been extensively tested on several video sequences, and show to outperform (more than 0.127 in AUC and 0.182 in F-measure on average) various state-of-the-art models.","PeriodicalId":388471,"journal":{"name":"2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114315782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2