2012 IEEE International Conference on Multimedia and Expo Workshops: Latest Publications

A Model-driven Approach for Integration of Interactive Applications and Web Services: A Case Study in Interactive Digital TV Platform
Pub Date: 2012-07-09 | DOI: 10.1109/ICMEW.2012.52
R. Kulesza, S. Meira, T. Ferreira, Eduardo S. M. Alexandre, Guido Lemos de Souza Filho, M. C. M. Neto, Celso A. S. Santos
Abstract: This work proposes a model-driven development approach to the integration of interactive multimedia applications and Web services. It extends an existing modeling language that integrates modeling concepts for interactive applications, adding support for Web services. Three Interactive Digital TV applications were modeled and developed. As we show, the evaluation of the approach brought benefits not supported by related works, such as requirements structuring and a reduction in the amount of work needed to finalize the generated code.
Citations: 5
Traffic Congestion Classification for Nighttime Surveillance Videos
Pub Date: 2012-07-09 | DOI: 10.1109/ICMEW.2012.36
Hua-Tsung Chen, Li-Wu Tsai, Hui-Zhen Gu, Suh-Yin Lee, B. Lin
Abstract: Traffic surveillance systems have been widely used for traffic monitoring. If the degree of traffic congestion can be evaluated from surveillance videos immediately, drivers can choose alternate routes to avoid traffic jams when congestion arises. Compared to daytime surveillance, factors such as poor visibility and higher noise increase the difficulty of video understanding in nighttime environments. In this paper, we propose a framework for traffic congestion classification in nighttime surveillance videos. The framework consists of three steps: first, headlights are detected based on three salient headlight features; second, headlights are grouped into individual vehicles by evaluating their correlations; third, a virtual detection line is adopted to gather the traffic information for congestion evaluation. The traffic congestion is then classified in real-time into five levels: jam, heavy, medium, mild and low. We use freeway nighttime surveillance videos to demonstrate the performance in accuracy and computation. Satisfactory experimental results validate the effectiveness of the proposed framework.
Citations: 14
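The final step of the framework described in the abstract above maps statistics gathered at the virtual detection line to one of five congestion levels. A minimal Python sketch, assuming (hypothetically) that average vehicle speed over the line is the gathered statistic; the thresholds are illustrative placeholders, not values from the paper:

```python
def classify_congestion(avg_speed_kmh: float) -> str:
    """Map average speed measured at a virtual detection line to one of the
    five congestion levels named in the paper: jam, heavy, medium, mild, low.
    Both the use of speed and the thresholds are illustrative assumptions."""
    if avg_speed_kmh < 10:
        return "jam"
    if avg_speed_kmh < 30:
        return "heavy"
    if avg_speed_kmh < 50:
        return "medium"
    if avg_speed_kmh < 70:
        return "mild"
    return "low"
```

For example, `classify_congestion(5.0)` yields `"jam"`, while free-flowing traffic at 90 km/h maps to `"low"`.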
Cross-Layered Hidden Markov Modeling for Surveillance Event Recognition
Pub Date: 2012-07-09 | DOI: 10.1109/ICMEW.2012.37
Chongyang Zhang, Jingbang Qiu, Shibao Zheng, Xiaokang Yang
Abstract: In this paper, a novel Cross-Layered Hidden Markov Model (CLHMM) is proposed for high-accuracy, low-complexity Surveillance Event Recognition (SER). Unlike the existing Layered HMM (LHMM), whose inferences are limited to adjacent layers, cross-layer inferences are designed in CLHMM to strengthen reasoning efficiency and reduce computational complexity. A Common Feature Particle Set (CFPS) is also developed in CLHMM to offer the model an assembly of pixel-level observations. Expert knowledge and the Baum-Welch algorithm are combined to achieve optimized performance in CLHMM learning. Experimental results on typical surveillance test sequences show that CLHMM outperforms LHMM in terms of accuracy and computational complexity.
Citations: 3
Mobile TV with Long Time Interleaving and Fast Zapping
Pub Date: 2012-07-09 | DOI: 10.1109/ICMEW.2012.114
C. Hellge, Valentina Pullano, M. Hensel, G. Corazza, T. Schierl, T. Wiegand
Abstract: The main challenge in provisioning Mobile TV services is to overcome the long burst errors often found in mobile reception conditions. Long time interleaving can be implemented by means of Application Layer FEC (AL-FEC) to increase the time diversity of the signal and thereby its robustness against burst errors. The main obstacle to long time interleaving for streaming services is the increase in service tune-in time, which significantly decreases the Quality of Experience (QoE) of end users. That is why today's Mobile TV systems are provisioned to minimize the time interleaving length and provide an acceptable tune-in time, even though service robustness would significantly benefit from a longer interleaving length. This paper presents a new way of service provisioning that marries fast zapping and long time interleaving by combining Layer-Aware FEC and layered media codecs with unequal time interleaving and appropriate transmission scheduling. The effect of the proposed scheme on QoE as well as on service tune-in time is analyzed. Simulation results within a Gilbert-Elliot channel report the benefit of the proposed scheme, which for the first time enables broadcast services with fast tune-in and, at the same time, long time interleaving.
Citations: 5
Improving Depth Compression in HEVC by Pre/Post Processing
Pub Date: 2012-07-09 | DOI: 10.1109/ICMEW.2012.112
Cuiling Lan, Jizheng Xu, Feng Wu
Abstract: Depth images have different characteristics from those of color images. They usually exhibit gradual changes within objects and steep changes around object boundaries. Compression standards such as H.264/AVC and High Efficiency Video Coding (HEVC) are efficient at dealing with gradually changing regions but usually perform poorly at edge regions. To facilitate reuse of current video coding designs and to further improve depth compression performance, we propose a pre/post-processing based compression strategy. By modifying the edge blocks in the depth image into flat blocks, the pre-processed image can be efficiently compressed using existing compression schemes, while the edge blocks are compressed separately by an edge-preserving codec. At the decoder, the decoded modified image and the edge blocks are merged to form the final reconstructed image. In our simulations, we apply this strategy to HEVC to evaluate the coding performance. Experimental results show the proposed scheme can achieve about 30%-40% bit savings for the Ballet and Breakdancers sequences and 60%-70% bit savings for Kinect-captured depth sequences in comparison with HEVC.
Citations: 5
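The pre/post-processing strategy in the abstract above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the edge test (depth range in a block above a threshold) and the side-stream representation (original blocks kept verbatim) are placeholder assumptions, whereas the paper compresses edge blocks with a dedicated edge-preserving codec.

```python
def is_edge_block(block, thresh=8):
    """Treat a block as an 'edge block' if its depth range exceeds thresh
    (an illustrative test; the paper's edge detection may differ)."""
    flat = [v for row in block for v in row]
    return max(flat) - min(flat) > thresh

def preprocess(blocks):
    """Replace edge blocks with flat (mean-valued) blocks so a standard
    codec compresses them cheaply; return the side stream of originals."""
    out, side = [], {}
    for i, b in enumerate(blocks):
        if is_edge_block(b):
            flat = [v for row in b for v in row]
            mean = sum(flat) // len(flat)
            out.append([[mean] * len(b[0]) for _ in b])
            side[i] = b  # sent separately (edge-preserving codec in the paper)
        else:
            out.append(b)
    return out, side

def postprocess(decoded, side):
    """At the decoder, merge decoded flat blocks with the edge blocks
    recovered from the side stream to form the reconstructed image."""
    return [side.get(i, b) for i, b in enumerate(decoded)]
```

With lossless coding of both streams, `postprocess(preprocess(blocks))` reconstructs the original blocks exactly; the gain comes from the flat blocks being far cheaper for the main codec to encode.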
Visualization of Real-World Events with Geotagged Tweet Photos
Pub Date: 2012-07-09 | DOI: 10.1109/ICMEW.2012.53
Y. Nakaji, Keiji Yanai
Abstract: Recently, microblogs such as Twitter, which enable people to post and read short messages from anywhere, have become very common. Since microblogs differ from traditional blogs in being instant and on the spot, they include much more information on events happening around the world. In addition, some messages posted to Twitter include photos and geotags as well as text, from which we can intuitively learn what is happening and where. We propose a method to select photos related to given real-world events from geotagged Twitter messages (tweets), taking advantage of geotags and the visual features of photos. We implemented a system that can visualize real-world events on an online map.
Citations: 29
INSPORAMA: INS-Aided Misalignment Correction in Feature-Based Panoramic Image Stitching
Pub Date: 2012-07-09 | DOI: 10.1109/ICMEW.2012.120
Yuan Gao, Chenguang Wang, E. Chang
Abstract: Feature-based image stitching, which aligns images with overlapping fields of view and then stitches them together, is a widely used panorama-construction technology. However, the current scale-, view- and illumination-invariant features can still result in misalignment because of occurrences of congruent or near-congruent features. We propose an INS (inertial navigation system) aided image-alignment method, named INSPORAMA, to reduce such misalignment. INSPORAMA improves image alignment accuracy by reducing both the image area and the number of candidate feature-pairs to compare. Based on INSPORAMA, we have built an Android application which is able to construct panoramic images in near real-time.
Citations: 1
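One way INS readings can shrink the matching work described above is by predicting the overlap between two shots from their yaw difference, so that features are only compared inside the predicted overlap strip. A rotation-only sketch; the horizontal field of view and the linear overlap model are assumptions for illustration, not details from the paper:

```python
def predicted_overlap_fraction(yaw_a_deg: float, yaw_b_deg: float,
                               hfov_deg: float = 60.0) -> float:
    """Predict the fraction of horizontal field of view two shots share,
    using only their INS yaw readings (pure-rotation pinhole assumption,
    illustrative hfov default)."""
    d = abs(yaw_a_deg - yaw_b_deg) % 360.0
    d = min(d, 360.0 - d)                 # shortest angular distance
    return max(0.0, 1.0 - d / hfov_deg)
```

Restricting feature matching to images with a positive predicted overlap, and to the overlap strip within them, cuts both the image area searched and the number of candidate feature-pairs, which is the kind of reduction the abstract attributes to INSPORAMA.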
Exerlearn Bike: An Exergaming System for Children's Educational and Physical Well-Being
Pub Date: 2012-07-09 | DOI: 10.1109/ICMEW.2012.91
Rajwa Alharthi, Ali Karime, Hussein Al Osman, Abdulmotaleb El Saddik
Abstract: Recently, games that incorporate exertion interfaces have emerged and are gaining attention from both academic researchers and commercial companies. Exergaming refers to video games that promote physical activity through playing. Exergames are believed to be a good method of promoting physical activity in children: such games encourage children to engage in physical activity while enjoying their gaming experience. Nonetheless, we wanted to investigate whether combining exercising and learning modalities could be more beneficial for children's well-being. In this paper, we present our exergaming system, the ExerLearn Bike System, which combines physical and educational aspects. The system not only engages children in exercising through playing, but also provides them with learning experiences at the same time. We adopted a modular design approach that makes it possible to use any stationary bicycle as an input interface by attaching a number of devices to the bike.
Citations: 14
Job Shop Scheduling at Your Fingertips Planning Alternatives Off the Cloud
Pub Date: 2012-07-09 | DOI: 10.1109/ICMEW.2012.40
Christoph Vogler, Hans-Rainer Beick, J. Opfermann, Wolfgang Holzer
Abstract: The algorithmic generation of detailed manufacturing plans close to an optimal solution is computationally infeasible: the enormous size of the space of potential solutions makes searching a process of exponential time complexity. Human-computer interaction may lead to practical solutions. Conventional approaches rely on appealing visualizations; modern media technologies additionally allow alternative solutions to be manipulated directly and modified ad hoc on demand. There is a trend away from complex client planning systems toward basic web services delivering solutions off the cloud. Human users master the manifold of solutions by interaction.
Citations: 0
Bi-Modal Person Recognition on a Mobile Phone: Using Mobile Phone Data
Pub Date: 2012-07-09 | DOI: 10.1109/ICMEW.2012.116
C. McCool, S. Marcel, A. Hadid, M. Pietikäinen, P. Matejka, J. Černocký, N. Poh, J. Kittler, A. Larcher, C. Lévy, D. Matrouf, J. Bonastre, P. Tresadern, Tim Cootes
Abstract: This paper presents a novel, fully automatic bi-modal (face and speaker) recognition system which runs in real-time on a mobile phone. The implemented system runs in real-time on a Nokia N900 and demonstrates the feasibility of performing both automatic face and speaker recognition on a mobile phone. We evaluate this recognition system on a novel, publicly available mobile phone database and provide a well-defined evaluation protocol. This database was captured almost exclusively using mobile phones and aims to improve research into deploying biometric techniques on mobile devices. We show, on this mobile phone database, that face and speaker recognition can be performed in a mobile environment, and that using score fusion can improve performance by more than 25% in terms of error rates.
Citations: 234
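Score fusion of the two modalities mentioned in the abstract above can be as simple as a weighted sum of normalized per-modality scores. A minimal sketch; the weight and decision threshold below are illustrative and would in practice be tuned on a development set, not values from the paper:

```python
def fuse_scores(face_score: float, speaker_score: float, w: float = 0.5) -> float:
    """Weighted-sum fusion of per-modality match scores (both assumed
    already normalized to a comparable range, e.g. [0, 1])."""
    return w * face_score + (1.0 - w) * speaker_score

def accept(face_score: float, speaker_score: float,
           threshold: float = 0.5, w: float = 0.5) -> bool:
    """Verification decision on the fused score: accept the identity claim
    when the fused score reaches the (illustrative) threshold."""
    return fuse_scores(face_score, speaker_score, w) >= threshold
```

A weak face score can thus be compensated by a strong speaker score (and vice versa), which is the mechanism behind the error-rate improvement that fusion brings over either modality alone.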