2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW): Latest Publications

Multi-core based HEVC hardware decoding system
Pub Date: 2014-07-14 | DOI: 10.1109/ICMEW.2014.6890626
Hyunmi Kim, Seunghyun Cho, Kyungjin Byun, N. Eum
Abstract: In this demo, a scalable HEVC hardware decoder is demonstrated for various applications including UHD. The architecture includes control logic for multi-core management and flexible in-loop filters that can process boundaries of picture partitions without a separate in-loop filter unit outside the pipeline. A two-level parallel processing approach makes the decoder operate in real time for high-performance applications. The demonstration on an FPGA prototype board shows the efficiency of the proposed scalable architecture achieved by the multi-core design. The system is estimated to be able to decode HEVC-coded UHD video in real time.
Citations: 4
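The decoder itself is hardware, but the two-level parallelism described above can be pictured with a toy software analogue: picture partitions are decoded concurrently by worker threads, and a boundary filter then runs across partition edges. The partition count, the stand-in decode and filter functions, and the thread-pool layout below are illustrative assumptions, not the authors' design.

```python
# Toy software analogue of two-level parallel decoding: a frame is split into
# horizontal partitions decoded by worker threads, and boundary filtering runs
# after the partitions are reconstructed, mimicking an in-loop filter that
# must see both sides of a partition edge.
from concurrent.futures import ThreadPoolExecutor

import numpy as np

NUM_CORES = 4            # assumed number of decoding cores
FRAME_H, FRAME_W = 64, 64

def decode_partition(part: np.ndarray) -> np.ndarray:
    """Stand-in for entropy decoding + reconstruction of one partition."""
    return part.astype(np.float32) / 255.0

def filter_partition_boundary(top: np.ndarray, bottom: np.ndarray) -> None:
    """Stand-in for an in-loop filter smoothing both sides of a boundary."""
    avg = 0.5 * (top[-1, :] + bottom[0, :])
    top[-1, :] = avg
    bottom[0, :] = avg

def decode_frame(bitstream_frame: np.ndarray) -> np.ndarray:
    parts = np.array_split(bitstream_frame, NUM_CORES, axis=0)
    with ThreadPoolExecutor(max_workers=NUM_CORES) as pool:
        recon = list(pool.map(decode_partition, parts))
    for top, bottom in zip(recon[:-1], recon[1:]):
        filter_partition_boundary(top, bottom)
    return np.vstack(recon)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (FRAME_H, FRAME_W), dtype=np.uint8)
    print(decode_frame(frame).shape)  # (64, 64)
```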
Equity crowdfunding - A Finnish case study
Pub Date: 2014-07-14 | DOI: 10.1109/ICMEW.2014.6890687
L. Lasrado, A. Lugmayr
Abstract: Crowdfunding is growing at a rapid pace internationally, and global platforms like Kickstarter and Indiegogo are the main drivers of this trend. Within the scope of this paper, we investigate the case of crowdfunding in Finland and identify particular national needs and the potential of this new phenomenon. Crowdfunding has been growing rapidly in Finland, where it is seen as both a fundraising and an effective marketing tool. Start-ups in particular have begun to use the power of the crowd to promote their ideas and products as well as to raise funding, and they are very open to using crowdfunding as an additional source of finance during their first round of financing. The effect of global platforms like Kickstarter and Indiegogo, together with successful national ventures such as Timo Vuorensola's campaigns for the movie Iron Sky, has popularised crowdfunding in Finland and demonstrates the power of this instrument. The younger audience in particular is open to this new tool. Within the scope of this paper we analyse the power of crowdfunding in the Finnish context based on a literature study and an analysis of data gathered from crowdfunding platforms.
Citations: 9
Gesture viewport: Interacting with media content using finger gestures on any surface
Pub Date: 2014-07-14 | DOI: 10.1109/ICMEW.2014.6890618
Hao Tang, Patrick Chiu, Qiong Liu
Abstract: In this paper, we describe Gesture Viewport, a projector-camera system that enables finger gesture interactions with media content on any surface. We propose a novel and computationally efficient finger localization method based on the detection of occlusion patterns inside a virtual sensor grid rendered in a layer on top of a viewport widget. We develop several robust interaction techniques to prevent unintentional gestures from occurring, to provide visual feedback to a user, and to minimize the interference of the sensor grid with the media content. We show the effectiveness of the system through three scenarios: viewing photos, navigating Google Maps, and controlling Google Street View.
Citations: 3
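A minimal sketch of the occlusion-pattern idea: each cell of a virtual sensor grid is compared against a reference captured when the surface is unoccluded, and the topmost darkened cell serves as a crude fingertip estimate. The grid size, brightness threshold, and fingertip rule are assumptions; the paper's interaction techniques (visual feedback, rejection of unintentional gestures) are not modelled here.

```python
# Minimal sketch of occlusion-based finger localization on a virtual sensor
# grid: each cell's mean brightness is compared with a reference frame of the
# unoccluded surface; cells that darken beyond a threshold count as occluded,
# and the topmost occluded cell is taken as a crude fingertip estimate.
import numpy as np

GRID_ROWS, GRID_COLS = 8, 12
OCCLUSION_DROP = 0.25    # fractional brightness drop that counts as occlusion

def cell_means(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape
    rows = np.array_split(np.arange(h), GRID_ROWS)
    cols = np.array_split(np.arange(w), GRID_COLS)
    return np.array([[frame[np.ix_(r, c)].mean() for c in cols] for r in rows])

def locate_fingertip(reference: np.ndarray, frame: np.ndarray):
    ref, cur = cell_means(reference), cell_means(frame)
    occluded = cur < (1.0 - OCCLUSION_DROP) * ref
    if not occluded.any():
        return None                      # no gesture detected
    r, c = np.argwhere(occluded)[0]      # topmost occluded cell ~ fingertip
    return int(r), int(c)

if __name__ == "__main__":
    ref = np.full((240, 320), 200.0)
    frame = ref.copy()
    frame[60:120, 100:140] = 40.0        # simulated finger shadow
    print(locate_fingertip(ref, frame))  # e.g. (2, 4)
```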
On the backward refinement in video coding
Pub Date: 2014-07-14 | DOI: 10.1109/ICMEW.2014.6890534
Xin Guo, Jia Wang
Abstract: Nowadays, most video coding standards are based on the hybrid coding framework, whose information-theoretic model was studied by Viswanathan and Berger more than ten years ago. The theoretic model, now known as the sequential coding model, abstracts consecutive frames as correlated random variables. Even the most efficient High Efficiency Video Coding (HEVC) standard can be roughly described by this sequential coding model. Searching for methods and algorithms that can improve the coding efficiency of HEVC is an attractive yet formidable task. In this paper, we propose a sequential coding model with backward refinement (BR). In this modified sequential coding model, after each frame is reconstructed, it can be backward refined by the frames encoded afterwards. Note that the proposed BR model does not depend on particular coding tools and thus can be implemented on any current video coding standard. Using the BR model, we can therefore further improve the coding efficiency of any video coding standard with only a slight modification of the coding structure. A theoretical analysis of the performance of the BR model is given. We also propose a practical BR algorithm, implemented on top of the HEVC standard. Experimental results show a maximum of 2.2% BD-rate saving compared with the original HEVC codec.
Citations: 0
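A rough sketch of the BR idea under strong simplifications: once frame t has been reconstructed, reconstructions of later frames are blended back into it. Real BR would use motion-compensated predictions and coded refinement information; the co-located blend and fixed weight below are assumptions for illustration only.

```python
# Illustrative sketch of backward refinement (BR): after frame t is
# reconstructed, later reconstructed frames feed a refinement of frame t.
# Co-located pixels and a fixed blend weight stand in for real motion
# compensation and rate-constrained refinement.
import numpy as np

def backward_refine(recon_t: np.ndarray,
                    later_frames: list[np.ndarray],
                    weight: float = 0.25) -> np.ndarray:
    """Blend frame t with the average of later reconstructions (toy model)."""
    if not later_frames:
        return recon_t
    backward_pred = np.mean(np.stack(later_frames), axis=0)
    return (1.0 - weight) * recon_t + weight * backward_pred

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = rng.normal(128, 20, (16, 16))                    # static content
    noisy_recon = source + rng.normal(0, 5, source.shape)     # frame t
    later = [source + rng.normal(0, 5, source.shape) for _ in range(2)]
    before = np.mean((noisy_recon - source) ** 2)
    after = np.mean((backward_refine(noisy_recon, later) - source) ** 2)
    print(f"MSE before {before:.2f}, after refinement {after:.2f}")
```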
Autoensum: Automated enhanced summary for multiple interacting objects
Pub Date: 2014-07-14 | DOI: 10.1109/ICMEW.2014.6890531
S. S. Thomas, Sumana Gupta, K. Venkatesh
Abstract: Video summarization is a promising approach to concatenating the moving patterns of objects into a single image. Such a summary appeals to viewers because of reduced browsing time, minimized spatio-temporal redundancy, and a sense of the motion activity of the scene. Summarization becomes crucial when the video contains multiple interacting objects, and in this situation the quality of the video summary, in terms of resolution, deteriorates. This paper addresses these concerns and presents an approach that gives the viewer a more automated, super-resolved summary of the general content of the video. We propose a method that provides fully automated reference frame selection, frame removal, super resolution and denoising for a productive video summary involving multiple interacting objects. We have evaluated our approach on different types of videos for quantitative and qualitative comparison.
Citations: 1
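As one possible stand-in for the automated reference-frame selection step mentioned above, the sketch below scores frames by the variance of their Laplacian (a common sharpness proxy) and picks the sharpest one. This criterion is an assumption, not necessarily the selection rule used in the paper.

```python
# Stand-in for automated reference-frame selection: score each frame by the
# variance of its Laplacian (a sharpness proxy) and pick the sharpest frame
# as the reference for the summary.
import numpy as np
from scipy.ndimage import laplace

def select_reference_frame(frames: list[np.ndarray]) -> int:
    scores = [laplace(f.astype(np.float64)).var() for f in frames]
    return int(np.argmax(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sharp = rng.integers(0, 256, (120, 160)).astype(np.float64)
    blurry = 0.5 * (np.roll(sharp, 1, axis=1) + sharp)   # crude blur
    print(select_reference_frame([blurry, sharp]))        # -> 1
```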
Automatic detection of temporal synchronization mismatches between the stereoscopic channels for stereo 3D videos
Pub Date: 2014-07-14 | DOI: 10.1109/ICMEW.2014.6890701
Mohan Liu, P. Ndjiki-Nya
Abstract: This paper presents an efficient approach to estimating temporal synchronization distortions in 3D sequences, based on measuring the motion consistency of object feature points in depth planes. Errors in post-processing steps or in the exposure setup of stereo cameras can cause temporal asynchronization between the stereoscopic channels. This kind of 3D artifact leads to a poor 3D experience for viewers. Because the synchronization mismatch between stereoscopic views is normally slight, the perceptibility of temporal synchronization mismatches is analyzed first, since the distortion is sometimes not noticeable. Experimentally, the framework introduces slight temporal deviations of 1, 2 or 3 frames to simulate the synchronization mismatch between stereoscopic channels. The experimental results show that the proposed framework detects the synchronization mismatch of a given 3D sequence with high accuracy at both the frame and the shot level.
Citations: 1
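A minimal sketch of the offset-estimation idea: if the right channel trails the left by k frames, per-frame motion signals of the two views align best at a shift of k. The motion signal below is a simple mean absolute frame difference, a stand-in for the paper's feature-point motion in depth planes, and the search range is an assumed parameter.

```python
# Sketch of temporal-offset estimation between stereoscopic channels: build a
# per-frame motion signal for each view (mean absolute frame difference here),
# then pick the shift that maximises correlation between the two signals.
import numpy as np

def motion_signal(view: np.ndarray) -> np.ndarray:
    """view: (T, H, W) grayscale frames -> (T-1,) per-frame motion magnitude."""
    return np.abs(np.diff(view.astype(np.float64), axis=0)).mean(axis=(1, 2))

def estimate_offset(left: np.ndarray, right: np.ndarray, max_shift: int = 3) -> int:
    ml, mr = motion_signal(left), motion_signal(right)
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = ml[max(0, s):len(ml) + min(0, s)]
        b = mr[max(0, -s):len(mr) + min(0, -s)]
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best_shift, best_corr = s, corr
    return best_shift

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    base = rng.random((40, 32, 32))
    left, right = base[2:], base[:-2]     # right trails left by two frames
    print(estimate_offset(left, right))   # -> -2 under this sign convention
```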
Integrating Bayesian Classifier into Random Walk optimizer for interactive image segmentation on mobile phones
Pub Date: 2014-07-14 | DOI: 10.1109/ICMEW.2014.6890530
Yan Gao, Xiabi Liu
Abstract: With the rapid development of mobile technology and digital image processing, mobile applications involving image segmentation are emerging in many fields. In this paper, we propose an effective and easy-to-use algorithm for interactive image segmentation (IIS) on mobile phones that integrates a Bayesian classifier into the random walk optimizer. We exploit the user input to train a Bayesian classifier that determines the posterior probabilities of image pixels belonging to foreground or background. These probabilities are used to calculate the edge weights and to label the seed pixels in the random walk optimizer for image segmentation. The resulting method is called BCRW for short. In this way we improve segmentation accuracy and alleviate the user's burden: the user is only required to draw a rectangle bounding the object of interest to obtain a high-quality segmentation result. We further design an efficient mobile phone version of the BCRW algorithm. The effectiveness and efficiency of the proposed algorithm are confirmed by comparative experiments with the GrabCut and PIBS algorithms, as well as by experiments on a mobile phone.
Citations: 1
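A minimal sketch of the BCRW pipeline under assumed details: a Gaussian naive Bayes classifier is trained from the user's bounding rectangle (interior as foreground sample, exterior as background), its foreground posterior map drives a random-walk segmentation, and confident posterior pixels act as seeds. The seed thresholds, the beta parameter, and the use of scikit-learn/scikit-image are illustrative choices, not the paper's exact formulation.

```python
# Sketch of the BCRW idea: a Gaussian naive Bayes classifier is trained on
# pixel colours from the user rectangle (interior vs. exterior), and its
# foreground posterior is fed to a random-walk segmenter so that edge weights
# follow posterior differences. Thresholds and beta are assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from skimage.segmentation import random_walker

def bcrw_segment(image: np.ndarray, rect: tuple) -> np.ndarray:
    """image: (H, W, 3) float RGB; rect: (top, bottom, left, right) user box."""
    t, b, l, r = rect
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)

    # Assumed seed rule: rectangle interior ~ foreground sample,
    # everything outside the rectangle ~ background sample.
    mask_fg = np.zeros((h, w), dtype=bool)
    mask_fg[t:b, l:r] = True
    clf = GaussianNB().fit(pixels, mask_fg.reshape(-1).astype(int))
    post_fg = clf.predict_proba(pixels)[:, 1].reshape(h, w)

    # Seeds for the random walker: confident posterior pixels inside/outside.
    seeds = np.zeros((h, w), dtype=int)
    seeds[(post_fg > 0.9) & mask_fg] = 1      # foreground seeds
    seeds[(post_fg < 0.1) & ~mask_fg] = 2     # background seeds
    return random_walker(post_fg, seeds, beta=130) == 1

if __name__ == "__main__":
    img = np.zeros((60, 80, 3))
    img[20:40, 30:50] = [0.9, 0.2, 0.2]       # red object on black background
    print(bcrw_segment(img, (15, 45, 25, 55)).sum())
```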
An ontological bagging approach for image classification of crowdsourced data
Pub Date: 2014-07-14 | DOI: 10.1109/ICMEW.2014.6890588
N. Xu, Jiangping Wang, Zhaowen Wang, Thomas S. Huang
Abstract: In this paper, we study how to use semantic relationships for image classification in order to improve classification accuracy. We achieve this by imitating the human visual system, which classifies categories from coarse to fine grains based on different visual features. We propose an ontological bagging algorithm in which the most discriminative weak attributes are automatically learned for different semantic levels by multiple instance learning, and the bagging idea is applied to reduce the error propagation of hierarchical classifiers. We also leverage ontological knowledge to augment crowdsourcing annotations (e.g., a hatchback is also a vehicle) in order to train hierarchical classifiers. Our method is tested on a vehicle dataset from the popular crowdsourcing dataset ImageNet. Experimental results show that our method not only achieves state-of-the-art results but also identifies semantically meaningful visual features.
Citations: 1
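A toy sketch of coarse-to-fine classification with bagging at each node of a small ontology. The two-level ontology, synthetic features, and bagged decision trees are assumptions; the paper additionally learns discriminative weak attributes per semantic level via multiple-instance learning, which is not modelled here.

```python
# Toy coarse-to-fine hierarchy with bagging at each node, illustrating how an
# ontology (vehicle -> {car, truck}, animal -> {cat, dog}) can structure
# classification and how bagging tempers error propagation at each level.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

ONTOLOGY = {"vehicle": ["car", "truck"], "animal": ["cat", "dog"]}
FINE_TO_COARSE = {f: c for c, fines in ONTOLOGY.items() for f in fines}

def make_bagger():
    return BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=10, random_state=0)

class OntologicalBagger:
    def fit(self, X, y_fine):
        y_coarse = np.array([FINE_TO_COARSE[f] for f in y_fine])
        self.coarse = make_bagger().fit(X, y_coarse)      # level-1 classifier
        self.fine = {}
        for c, fines in ONTOLOGY.items():                 # level-2 classifiers
            idx = np.isin(y_fine, fines)
            self.fine[c] = make_bagger().fit(X[idx], y_fine[idx])
        return self

    def predict(self, X):
        coarse_pred = self.coarse.predict(X)
        return np.array([self.fine[c].predict(x[None])[0]
                         for c, x in zip(coarse_pred, X)])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    centers = {"car": [0, 0], "truck": [0, 3], "cat": [5, 0], "dog": [5, 3]}
    X = np.vstack([rng.normal(c, 0.5, (50, 2)) for c in centers.values()])
    y = np.array([lbl for lbl in centers for _ in range(50)])
    model = OntologicalBagger().fit(X, y)
    print((model.predict(X) == y).mean())
```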
Selection and evolution in narrative ecosystems. A theoretical framework for narrative prediction
Pub Date: 2014-07-14 | DOI: 10.1109/ICMEW.2014.6890658
G. Pescatore, Veronica Innocenti, Paola Brembilla
Abstract: The aim of this paper is to investigate audiovisual vast narratives from a new theoretical perspective named the narrative ecosystem, a paradigm that encompasses cross-disciplinary perspectives on TV series studies. The narrative ecosystem model responds to the need for a dynamic model to represent vast narratives, accounting for the interactions of agents, changes and evolutions. What is still lacking, though, is a computational method for making forecasts in the field of TV serial narratives. Through this analysis we present some theoretical foundations, drawing on ecological selection and evolution patterns, that might be helpful in building a computational method of narrative prediction in our field of interest.
Citations: 9
Supporting binocular visual quality prediction using machine learning
Pub Date: 2014-07-14 | DOI: 10.1109/ICMEW.2014.6890547
Shanshan Wang, F. Shao, G. Jiang
Abstract: We present a binocular visual quality prediction model using machine learning (ML). The model includes two steps: a training phase and a test phase. More specifically, we first construct a feature vector from the binocular energy responses of stereoscopic images under different stimuli of orientations, spatial frequencies and phase shifts, and then use ML to handle the actual mapping of the feature vector to quality scores in the training procedure. Finally, the quality score is predicted by multiple iterations in the test procedure. Experimental results on three publicly available 3D image quality assessment databases demonstrate that, in comparison with the most closely related existing methods, the proposed technique achieves performance that is comparatively consistent with subjective assessment.
Citations: 4
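A sketch of the feature-construction step under assumed parameters: binocular energy responses from a small Gabor bank (a few orientations, spatial frequencies, and a quadrature phase pair) are pooled into a feature vector, and a support-vector regressor stands in for the ML mapping to quality scores. The filter-bank settings, pooling, and regressor are illustrative; the synthetic training labels in the demo are placeholders.

```python
# Sketch of binocular energy features: responses of left and right views to a
# small Gabor bank are combined as (L + R)^2 summed over a quadrature phase
# pair, pooled per filter, and mapped to a quality score by an SVR.
import numpy as np
from scipy.ndimage import convolve
from sklearn.svm import SVR

def gabor_kernel(freq, theta, phase, size=15, sigma=3.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr + phase)

def binocular_energy_features(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        for freq in (0.1, 0.2):
            energy = 0.0
            for phase in (0.0, np.pi / 2):        # quadrature phase pair
                k = gabor_kernel(freq, theta, phase)
                resp = convolve(left, k) + convolve(right, k)
                energy = energy + resp**2
            feats.append(energy.mean())           # pool per filter
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X, y = [], []
    for _ in range(20):                           # toy pairs with synthetic MOS
        l = rng.random((48, 48))
        r = l + rng.normal(0, 0.05, l.shape)
        X.append(binocular_energy_features(l, r))
        y.append(rng.uniform(1, 5))
    model = SVR().fit(np.array(X), np.array(y))
    print(model.predict(np.array(X)[:3]))
```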