Proceedings of the 16th International Conference on Multimodal Interaction: Latest Publications

Session details: Doctoral Spotlight Session
M. Cristani
{"title":"Session details: Doctoral Spotlight Session","authors":"M. Cristani","doi":"10.1145/3246748","DOIUrl":"https://doi.org/10.1145/3246748","url":null,"abstract":"","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"7 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123446871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Session details: Oral Session 3: Affect and Cognitive Modeling
S. Oviatt
{"title":"Session details: Oral Session 3: Affect and Cognitive Modeling","authors":"S. Oviatt","doi":"10.1145/3246746","DOIUrl":"https://doi.org/10.1145/3246746","url":null,"abstract":"","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124685298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal Prediction Approach
Proceedings of the 16th International Conference on Multimodal Interaction. Pub Date: 2014-11-12. DOI: 10.1145/2663204.2663260
Sunghyun Park, H. Shim, Moitreya Chatterjee, Kenji Sagae, Louis-Philippe Morency
{"title":"Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal Prediction Approach","authors":"Sunghyun Park, H. Shim, Moitreya Chatterjee, Kenji Sagae, Louis-Philippe Morency","doi":"10.1145/2663204.2663260","DOIUrl":"https://doi.org/10.1145/2663204.2663260","url":null,"abstract":"Our lives are heavily influenced by persuasive communication, and it is essential in almost any types of social interactions from business negotiation to conversation with our friends and family. With the rapid growth of social multimedia websites, it is becoming ever more important and useful to understand persuasiveness in the context of social multimedia content online. In this paper, we introduce our newly created multimedia corpus of 1,000 movie review videos obtained from a social multimedia website called ExpoTV.com, which will be made freely available to the research community. Our research results presented here revolve around the following 3 main research hypotheses. Firstly, we show that computational descriptors derived from verbal and nonverbal behavior can be predictive of persuasiveness. We further show that combining descriptors from multiple communication modalities (audio, text and visual) improve the prediction performance compared to using those from single modality alone. Secondly, we investigate if having prior knowledge of a speaker expressing a positive or negative opinion helps better predict the speaker's persuasiveness. Lastly, we show that it is possible to make comparable prediction of persuasiveness by only looking at thin slices (shorter time windows) of a speaker's behavior.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125137052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 125
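To make the fusion idea in the abstract concrete, here is a minimal sketch of early fusion, where per-modality descriptors are concatenated before training a single persuasiveness classifier. This is not the authors' code: the feature arrays and corpus size are random placeholders standing in for real prosodic, lexical, and facial descriptors.

```python
# Minimal sketch (hypothetical data, not the authors' code): early fusion of
# per-video audio, text, and visual descriptors into one persuasiveness classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_videos = 200                                   # hypothetical corpus size
audio = rng.normal(size=(n_videos, 20))          # e.g. prosodic statistics
text = rng.normal(size=(n_videos, 50))           # e.g. lexical features
visual = rng.normal(size=(n_videos, 30))         # e.g. facial-gesture statistics
persuasive = rng.integers(0, 2, size=n_videos)   # binary persuasiveness label

def fused_score(*feature_blocks):
    """Concatenate modality blocks (early fusion) and cross-validate a classifier."""
    X = np.concatenate(feature_blocks, axis=1)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, persuasive, cv=5).mean()

print("audio only       :", fused_score(audio))
print("audio+text+visual:", fused_score(audio, text, visual))
```

With real descriptors, the comparison between the single-modality and fused scores is what the paper's first hypothesis tests.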
SoundFLEX: Designing Audio to Guide Interactions with Shape-Retaining Deformable Interfaces
Proceedings of the 16th International Conference on Multimodal Interaction. Pub Date: 2014-11-12. DOI: 10.1145/2663204.2663278
Koray Tahiroglu, Thomas Svedström, Valtteri Wikström, S. Overstall, Johan Kildal, T. Ahmaniemi
{"title":"SoundFLEX: Designing Audio to Guide Interactions with Shape-Retaining Deformable Interfaces","authors":"Koray Tahiroglu, Thomas Svedström, Valtteri Wikström, S. Overstall, Johan Kildal, T. Ahmaniemi","doi":"10.1145/2663204.2663278","DOIUrl":"https://doi.org/10.1145/2663204.2663278","url":null,"abstract":"Shape-retaining freely-deformable interfaces can take innumerable distinct shapes, and creating specific target configurations can be a challenge. In this paper, we investigate how audio can guide a user in this process, through the use of either musical or metaphoric sounds. In a formative user study, we found that sound encouraged action possibilities and made the affordances of the interface perceivable. We also found that adding audio as a modality along with vision and touch, made a positive contribution to guiding users' interactions with the interface.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125741917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Gaze-in 2014: The 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction
Proceedings of the 16th International Conference on Multimodal Interaction. Pub Date: 2014-11-12. DOI: 10.1145/2663204.2668316
Hung-Hsuan Huang, R. Bednarik, Kristiina Jokinen, Y. Nakano
{"title":"Gaze-in 2014: the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction","authors":"Hung-Hsuan Huang, R. Bednarik, Kristiina Jokinen, Y. Nakano","doi":"10.1145/2663204.2668316","DOIUrl":"https://doi.org/10.1145/2663204.2668316","url":null,"abstract":"This paper presents a summary of the seventh workshop on Eye Gaze in Intelligent Human Machine Interaction. The Gaze-in 2014 workshop is a part of a series of workshops held around the topics related to gaze and multimodal interaction. The workshop web-site can be found at http://hhhuang.homelinux.com/gaze_in/.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130019537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
WebSanyog: A Portable Assistive Web Browser for People with Cerebral Palsy
Proceedings of the 16th International Conference on Multimodal Interaction. Pub Date: 2014-11-12. DOI: 10.1145/2663204.2669628
Tirthankar Dasgupta, Manjira Sinha, Gagan Kandra, A. Basu
{"title":"WebSanyog: A Portable Assistive Web Browser for People with Cerebral Palsy","authors":"Tirthankar Dasgupta, Manjira Sinha, Gagan Kandra, A. Basu","doi":"10.1145/2663204.2669628","DOIUrl":"https://doi.org/10.1145/2663204.2669628","url":null,"abstract":"The paper presents design and development of WebSanyog, an Android based web browser that helps people with severe form of spastic cerebral palsy and highly restricted motor movement skills to access web contents. The target user group has acted as our design advisors through constant interaction during the whole process. Features like, auto scanning mechanism, predictive keyboard and intelligent link parser make the system suitable for our target users. The browser is primarily developed for mobile and tablet based devices keeping in mind the portability issue.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124548810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Additive Value of Multimodal Features for Predicting Engagement, Frustration, and Learning during Tutoring
Proceedings of the 16th International Conference on Multimodal Interaction. Pub Date: 2014-11-12. DOI: 10.1145/2663204.2663264
Joseph F. Grafsgaard, Joseph B. Wiggins, A. Vail, K. Boyer, E. Wiebe, James C. Lester
{"title":"The Additive Value of Multimodal Features for Predicting Engagement, Frustration, and Learning during Tutoring","authors":"Joseph F. Grafsgaard, Joseph B. Wiggins, A. Vail, K. Boyer, E. Wiebe, James C. Lester","doi":"10.1145/2663204.2663264","DOIUrl":"https://doi.org/10.1145/2663204.2663264","url":null,"abstract":"Detecting learning-centered affective states is difficult, yet crucial for adapting most effectively to users. Within tutoring in particular, the combined context of student task actions and tutorial dialogue shape the student's affective experience. As we move toward detecting affect, we may also supplement the task and dialogue streams with rich sensor data. In a study of introductory computer programming tutoring, human tutors communicated with students through a text-based interface. Automated approaches were leveraged to annotate dialogue, task actions, facial movements, postural positions, and hand-to-face gestures. These dialogue, nonverbal behavior, and task action input streams were then used to predict retrospective student self-reports of engagement and frustration, as well as pretest/posttest learning gains. The results show that the combined set of multimodal features is most predictive, indicating an additive effect. Additionally, the findings demonstrate that the role of nonverbal behavior may depend on the dialogue and task context in which it occurs. This line of research identifies contextual and behavioral cues that may be leveraged in future adaptive multimodal systems.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131254927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 59
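As a rough illustration of the "additive" comparison described in the abstract, the following sketch cross-validates a regression model on progressively larger feature sets. The data here are entirely hypothetical placeholders; the actual study used annotated dialogue, task-action, and sensor streams rather than random arrays.

```python
# Minimal sketch (hypothetical data, not the study's features): testing whether
# adding feature streams (dialogue -> +task -> +nonverbal) improves prediction
# of a self-reported engagement score.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_students = 120
dialogue = rng.normal(size=(n_students, 10))   # e.g. dialogue-act counts
task = rng.normal(size=(n_students, 8))        # e.g. task-action statistics
nonverbal = rng.normal(size=(n_students, 12))  # e.g. facial / posture features
engagement = rng.normal(size=n_students)       # placeholder self-report score

feature_sets = {
    "dialogue only": [dialogue],
    "dialogue + task": [dialogue, task],
    "dialogue + task + nonverbal": [dialogue, task, nonverbal],
}
for name, blocks in feature_sets.items():
    X = np.concatenate(blocks, axis=1)
    r2 = cross_val_score(Ridge(alpha=1.0), X, engagement, cv=5, scoring="r2").mean()
    print(f"{name:28s} mean cross-validated R^2 = {r2:.3f}")
```

An additive effect would show up as the cross-validated score rising as each stream is appended.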
Context-Aware Multimodal Robotic Health Assistant
Proceedings of the 16th International Conference on Multimodal Interaction. Pub Date: 2014-11-12. DOI: 10.1145/2663204.2669627
Vidyavisal Mangipudi, Raj Tumuluri
{"title":"Context-Aware Multimodal Robotic Health Assistant","authors":"Vidyavisal Mangipudi, Raj Tumuluri","doi":"10.1145/2663204.2669627","DOIUrl":"https://doi.org/10.1145/2663204.2669627","url":null,"abstract":"Reduced adherence to medical regimen has led to poorer health, more frequent hospitalization and costs the American economy over $290 Billion annually. EasyHealth Assistant (EHA) is a context aware and interactive robot that helps patients receive their medication in the prescribed dosage at the right time. Additionally, EHA features multimodal elements such as Face Recognition, Speech Recognition + TTS, Motion Sensing and MindWave (EEG) interactions that were developed using W3C MMI Architecture and Markup Languages. EHA improves the Caregiver/ Doctor -- Patient collaboration with tools like Remote control and Video conference. It also provides the Caregivers with real-time statistics and allows easy monitoring of medical adherence and health vitals, which should result in improved outcome for the patient.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130153195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
MAPTRAITS 2014 - The First Audio/Visual Mapping Personality Traits Challenge - An Introduction: Perceived Personality and Social Dimensions
Proceedings of the 16th International Conference on Multimodal Interaction. Pub Date: 2014-11-12. DOI: 10.1145/2663204.2668317
Oya Celiktutan, F. Eyben, E. Sariyanidi, H. Gunes, Björn Schuller
{"title":"MAPTRAITS 2014 - The First Audio/Visual Mapping Personality Traits Challenge - An Introduction: Perceived Personality and Social Dimensions","authors":"Oya Celiktutan, F. Eyben, E. Sariyanidi, H. Gunes, Björn Schuller","doi":"10.1145/2663204.2668317","DOIUrl":"https://doi.org/10.1145/2663204.2668317","url":null,"abstract":"The Audio/Visual Mapping Personality Challenge and Workshop (MAPTRAITS) is a competition event that is organised to facilitate the development of signal processing and machine learning techniques for the automatic analysis of personality traits and social dimensions. MAPTRAITS includes two sub-challenges, the continuous space-time sub-challenge and the quantised space-time sub-challenge. The continuous sub-challenge evaluated how systems predict the variation of perceived personality traits and social dimensions in time, whereas the quantised challenge evaluated the ability of systems to predict the overall perceived traits and dimensions in shorter video clips. To analyse the effect of audio and visual modalities on personality perception, we compared systems under three different settings: visual-only, audio-only and audio-visual. With MAPTRAITS we aimed at improving the knowledge on the automatic analysis of personality traits and social dimensions by producing a benchmarking protocol and encouraging the participation of various research groups from different backgrounds.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130361140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
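The difference between the two sub-challenge settings can be illustrated with a small sketch. The metrics below (Pearson correlation for the continuous setting, exact-match accuracy for the quantised setting) and the synthetic trait curves are assumptions chosen for illustration, not necessarily the official challenge measures.

```python
# Illustrative sketch only: one plausible way to score the two sub-challenge
# settings described above (the official MAPTRAITS protocol may differ).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Continuous sub-challenge: frame-level predictions of one perceived trait.
true_curve = np.sin(np.linspace(0, 6, 300)) + rng.normal(0, 0.1, 300)
pred_curve = true_curve + rng.normal(0, 0.3, 300)
r, _ = pearsonr(true_curve, pred_curve)
print(f"continuous: Pearson r = {r:.3f}")

# Quantised sub-challenge: one label per short clip on a discrete 0-10 scale.
true_labels = rng.integers(0, 11, size=50)
pred_labels = np.clip(true_labels + rng.integers(-1, 2, size=50), 0, 10)
accuracy = np.mean(pred_labels == true_labels)
print(f"quantised: exact-match accuracy = {accuracy:.2f}")
```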
Session details: Poster Session 1
O. Aran, Louis-Philippe Morency
{"title":"Session details: Poster Session 1","authors":"O. Aran, Louis-Philippe Morency","doi":"10.1145/3246744","DOIUrl":"https://doi.org/10.1145/3246744","url":null,"abstract":"","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131437758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0