HCMC '15 Latest Publications

Expressive Multimedia: Bringing Action to Physical World by Dancing-Tablet
HCMC '15 Pub Date: 2015-10-30 DOI: 10.1145/2810397.2810399
Muhammad Sikandar Lal Khan, Haibo Li, S. Réhman
{"title":"Expressive Multimedia: Bringing Action to Physical World by Dancing-Tablet","authors":"Muhammad Sikandar Lal Khan, Haibo Li, S. Réhman","doi":"10.1145/2810397.2810399","DOIUrl":"https://doi.org/10.1145/2810397.2810399","url":null,"abstract":"The design practice based on embodied interaction concept focuses on developing new user interfaces for computer devices that merge the digital content with the physical world. In this work we have proposed a novel embodied interaction based design in which the 'action' information of the digital content is presented in the physical world. More specifically, we have mapped the 'action' information of the video content from the digital world into the physical world. The motivating example presented in this paper is our novel dancing-tablet, in which a tablet-PC dances on the rhythm of the song, hence the 'action' information is not just confined into a 2D flat display but also expressed by it. This paper presents i) hardware design of our mechatronic dancing-tablet platform, ii) software algorithm for musical feature extraction and iii) embodied computational model for mapping 'action' information of the musical expression to the mechatronic platform. Our user study shows that the overall perception of audio-video music is enhanced by our dancing-tablet setup.","PeriodicalId":253945,"journal":{"name":"HCMC '15","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123915102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
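The paper's actual musical-feature-extraction algorithm is not given in this listing. As a loose, hypothetical illustration of the general idea (energy-based beat picking on a synthetic click track, with the beat phase mapped to a platform tilt angle), one might sketch:

```python
import math

def energy_envelope(signal, frame=512):
    """Short-time energy per non-overlapping frame of a mono signal."""
    return [sum(x * x for x in signal[i:i + frame])
            for i in range(0, len(signal) - frame + 1, frame)]

def detect_beats(env, threshold_ratio=0.5):
    """Indices of frames whose energy is a local peak above a fraction of the max."""
    peak = max(env)
    beats = []
    for i in range(1, len(env) - 1):
        if env[i] > threshold_ratio * peak and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            beats.append(i)
    return beats

def tilt_angle(t, beat_period, max_deg=15.0):
    """Map beat phase at time t to a hypothetical platform tilt angle (degrees)."""
    phase = (t % beat_period) / beat_period
    return max_deg * math.sin(2 * math.pi * phase)

# Synthetic test signal: 8 clicks, one every 4096 samples at 8 kHz (~117 BPM),
# each click a 200-sample 440 Hz burst aligned to a frame boundary.
sr = 8000
signal = [0.0] * (9 * 4096 + 1024)
for k in range(8):
    onset = (k + 1) * 4096
    for n in range(200):
        signal[onset + n] = math.sin(2 * math.pi * 440 * n / sr)

env = energy_envelope(signal)
beats = detect_beats(env)
print(len(beats))  # 8 clicks detected, 8 frames apart
```

The mapping step (`tilt_angle`) stands in for the paper's embodied computational model; a real mechatronic platform would feed this angle to its servo controller once per control tick.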
Event Detection and Highlight Detection of Broadcasted Game Videos
HCMC '15 Pub Date: 2015-10-30 DOI: 10.1145/2810397.2810398
W. Chu, Yung-Chieh Chou
{"title":"Event Detection and Highlight Detection of Broadcasted Game Videos","authors":"W. Chu, Yung-Chieh Chou","doi":"10.1145/2810397.2810398","DOIUrl":"https://doi.org/10.1145/2810397.2810398","url":null,"abstract":"Efficient access of game videos is urgently demanded due to the emergence of live streaming platforms and the explosive numbers of gamers and viewers. In this work we facilitate efficient access from two aspects: game event detection and highlight detection. By recognizing predefined text displayed on screen when some events occur, we associate game events with time stamps to facilitate direct access. We jointly consider visual features, events, and viewer's reaction to construct two highlight models, and enable compact game presentation. Experimental results show the effectiveness of the proposed methods. As one of the early attempts on analyzing broadcasted game videos from the perspective of multimedia content analysis, our contributions are twofold. First, we design and extract game-specific features considering visual content, event semantics, and viewer's reaction. Second, we integrate clues from these three domains based on a psychological approach and a data-driven approach to characterize game highlights.","PeriodicalId":253945,"journal":{"name":"HCMC '15","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124800172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
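The event-detection idea — recognizing predefined on-screen text and attaching a timestamp to its first appearance — can be shown with a toy sketch. The OCR stream, event strings, and function names below are invented for illustration and are not from the paper:

```python
# Hypothetical per-frame OCR output: (time in seconds, text seen on screen).
ocr_stream = [
    (1.0, ""), (2.0, "First Blood"), (3.0, "First Blood"),
    (10.0, ""), (12.5, "Double Kill"), (13.0, ""),
    (40.0, "Victory"),
]

EVENT_TEXTS = {"First Blood", "Double Kill", "Victory"}

def detect_events(stream, event_texts):
    """Associate each predefined on-screen text with the timestamp of its
    first appearance, ignoring repeats while the text stays on screen."""
    events, prev = [], None
    for t, text in stream:
        if text in event_texts and text != prev:
            events.append((t, text))
        prev = text
    return events

print(detect_events(ocr_stream, EVENT_TEXTS))
# [(2.0, 'First Blood'), (12.5, 'Double Kill'), (40.0, 'Victory')]
```

The resulting (timestamp, event) pairs are exactly the index a viewer would use to jump directly to moments of interest in a long broadcast.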
Multimedia Fatigue Detection for Adaptive Infotainment User Interface
HCMC '15 Pub Date: 2015-10-30 DOI: 10.1145/2810397.2810400
Sultan Alhazmi, M. Saini, Abdulmotaleb El Saddik
{"title":"Multimedia Fatigue Detection for Adaptive Infotainment User Interface","authors":"Sultan Alhazmi, M. Saini, Abdulmotaleb El Saddik","doi":"10.1145/2810397.2810400","DOIUrl":"https://doi.org/10.1145/2810397.2810400","url":null,"abstract":"Current vehicles are equipped with user interfaces that assist the drivers by presenting essential information such as navigation, speed limit, etc. In this work we present a fatigue detection model in order to build an adaptive user interface for vehicles that changes its properties according to the fatigue level of the driver. When the driver is fatigued, the interface parameters, such as intensity and color combination, are modulated to make the user interface more attentive and intrusive. At other times, when the driver is not fatigued, the interface properties are optimized for aesthetics and pleasure. In this way, the adaptive interface provides a warning to the driver while she is fatigued along with the routine essential information. We take a multimedia approach to measure fatigue by analysing the driver behaviour. The system captures driver behaviour with four media streams that capture: angular velocity of steering wheel, force on brake pedal, force on gas pedal, and grip force. These continuous media stream are fused together with other contextual parameters to detect fatigue. In the experiments, we evaluate two fusion techniques and 16 media stream combinations. It is found that the fatigue detection accuracy increases almost linearly with number of media streams fused. We also found that the steering wheel provides best cue of fatigue, while the gas pedal provides weakest cue. 
Personalized Bayesian Networks further enhance the accuracy.","PeriodicalId":253945,"journal":{"name":"HCMC '15","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127044043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
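The paper evaluates two fusion techniques but this listing does not specify them, so the following is a generic late-fusion sketch: per-stream fatigue scores are combined with made-up weights that merely echo the reported ordering (steering strongest cue, gas pedal weakest), then squashed to a single score:

```python
import math

def fuse_fatigue(cues, weights):
    """Late fusion: weighted sum of per-stream fatigue scores in [0, 1],
    squashed through a logistic into a single probability-like score."""
    z = sum(weights[name] * score for name, score in cues.items())
    bias = -0.5 * sum(weights.values())  # centre the decision boundary at 0.5
    return 1.0 / (1.0 + math.exp(-(z + bias)))

# Hypothetical weights and readings, not values from the paper.
weights = {"steering": 2.0, "brake": 1.0, "gas": 0.5, "grip": 1.0}
alert = {"steering": 0.1, "brake": 0.2, "gas": 0.1, "grip": 0.2}
tired = {"steering": 0.9, "brake": 0.8, "gas": 0.7, "grip": 0.8}

print(fuse_fatigue(alert, weights) < 0.5 < fuse_fatigue(tired, weights))  # True
```

Dropping a stream from the `cues` dict models one of the 16 stream combinations the paper tests; the reported near-linear accuracy gain comes from each extra stream contributing independent evidence to the sum.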
Highly Accurate and Fully Automatic Head Pose Estimation from a Low Quality Consumer-Level RGB-D Sensor
HCMC '15 Pub Date: 2015-10-30 DOI: 10.1145/2810397.2810401
R. S. Ghiass, Ognjen Arandjelovic, D. Laurendeau
{"title":"Highly Accurate and Fully Automatic Head Pose Estimation from a Low Quality Consumer-Level RGB-D Sensor","authors":"R. S. Ghiass, Ognjen Arandjelovic, D. Laurendeau","doi":"10.1145/2810397.2810401","DOIUrl":"https://doi.org/10.1145/2810397.2810401","url":null,"abstract":"In this paper we describe a novel algorithm for head pose estimation from low-quality RGB-D data acquired using a consumer-level device such as Microsoft Kinect. We focus our attention on the well-known challenges in the processing of depth point-clouds which include spurious data, noise, and missing data caused by occlusion. Our algorithm performs pose estimation by fitting a 3D morphable model which explicitly includes pose parameters. Several important novelties are described. (i) We propose a method for automatic removal of the majority of spurious depth data which uses facial feature detection in the associated RGB image. By back-projecting the corresponding image loci and intersecting them with the 3D point-cloud we construct the facial features plane used to crop the point-cloud. (ii) Both high convergence speed and high fitting accuracy are achieved by formulating the fitting objective function to include both point-to-point and point-to-plane point-cloud matching terms. (iii) The effect of misleading point-cloud matches caused by noisy or missing data is reduced by using the Tukey biweight function as a robust statistic and by employing a re-weighting scheme for different terms in the fitting objective function. (iv) Lastly, the proposed algorithm is evaluated on the standard benchmark Biwi Kinect Head Pose Database on which it is shown to outperform substantially the current state-of-the-art, achieving more than a 20-fold reduction in error estimates of all three Euler angles i.e. yaw, pitch, and roll. 
A thorough analysis of the results is used both to gain full insight into the behaviour of the described algorithm as well as to highlight important methodological issues which future authors should consider in the evaluation of pose estimation algorithms.","PeriodicalId":253945,"journal":{"name":"HCMC '15","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115447542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
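The Tukey biweight mentioned in novelty (iii) is a standard robust statistic, not specific to this paper. Its weight function down-weights large residuals smoothly and gives exactly zero weight to gross outliers, which is what suppresses noisy or missing-data matches in the fitting objective. A minimal implementation (the tuning constant c ≈ 4.685 is the conventional 95%-efficiency choice, an assumption — the paper's own constant is not stated here):

```python
def tukey_biweight(r, c=4.685):
    """Tukey biweight weight w(r) = (1 - (r/c)^2)^2 for |r| <= c, else 0.
    Inliers keep near-full weight; outliers beyond c are rejected entirely,
    unlike least squares, which lets them dominate the fit."""
    if abs(r) > c:
        return 0.0
    u = (r / c) ** 2
    return (1.0 - u) ** 2

# Small residuals keep high weight; a gross outlier gets exactly zero.
print(tukey_biweight(0.0), tukey_biweight(10.0))  # 1.0 0.0
```

In an iteratively re-weighted fit, each point-cloud match residual `r` would be passed through this function on every iteration, so spurious matches progressively lose influence.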