{"title":"Investigating Intrusiveness of Workload Adaptation","authors":"F. Putze, Tanja Schultz","doi":"10.1145/2663204.2663279","DOIUrl":"https://doi.org/10.1145/2663204.2663279","url":null,"abstract":"In this paper, we investigate how an automatic task assistant which can detect and react to a user's workload level is able to support the user in a complex, dynamic task. In a user study, we design a dispatcher scenario with low and high workload conditions and compare the effect of four support strategies with different levels of intrusiveness using objective and subjective metrics. We see that a more intrusive strategy results in higher efficiency and effectiveness, but is also less accepted by the participants. We also show that the benefit of supportive behavior depends on the user's workload level, i.e. adaptation to its changes are necessary. We describe and evaluate a Brain Computer Interface that is able to provide the necessary user state detection.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124188934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Influential Statements in Group Discussions using Speech and Head Motion Information","authors":"Fumio Nihei, Y. Nakano, Yuki Hayashi, Hung-Hsuan Huang, S. Okada","doi":"10.1145/2663204.2663248","DOIUrl":"https://doi.org/10.1145/2663204.2663248","url":null,"abstract":"Group discussions are used widely when generating new ideas and forming decisions as a group. Therefore, it is assumed that giving social influence to other members through facilitating the discussion is an important part of discussion skill. This study focuses on influential statements that affect discussion flow and highly related to facilitation, and aims to establish a model that predicts influential statements in group discussions. First, we collected a multimodal corpus using different group discussion tasks; in-basket and case-study. Based on schemes for analyzing arguments, each utterance was annotated as being influential or not. Then, we created classification models for predicting influential utterances using prosodic features as well as attention and head motion information from the speaker and other members of the group. In our model evaluation, we discovered that the assessment of each participant in terms of discussion facilitation skills by experienced observers correlated highly to the number of influential utterances by a given participant. This suggests that the proposed model can predict influential statements with considerable accuracy, and the prediction results can be a good predictor of facilitators in group discussions.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115088505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating Remote PPG in Facial Expression Analysis Framework","authors":"H. E. Tasli, Amogh Gudi, M. D. Uyl","doi":"10.1145/2663204.2669626","DOIUrl":"https://doi.org/10.1145/2663204.2669626","url":null,"abstract":"This demonstration paper presents the FaceReader framework where human face image and skin color variations are analyzed for observing facial expressions, vital signs including but not limited to average heart rate (HR), heart rate variability (HRV) and also the stress and confidence levels of the person. Remote monitoring of the facial and vital signs could be useful for wide range of applications. FaceReader uses active appearance modeling for facial analysis and novel signal processing techniques for heart rate and variability estimation. The performance has been objectively evaluated and psychological guidelines for stress measurements are incorporated in the framework for analysis.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127087780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affective Analysis of Abstract Paintings Using Statistical Analysis and Art Theory","authors":"A. Sartori","doi":"10.1145/2663204.2666289","DOIUrl":"https://doi.org/10.1145/2663204.2666289","url":null,"abstract":"A novel approach to the emotion classification of abstract paintings is proposed. Based on a user study, we employ computer vision techniques to understand what makes an abstract artwork emotional. Our aim is to identify and quantify which are the emotional regions of abstract paintings, as well as the role of each feature (colour, shapes and texture) on the human emotional response. In addition, we investigate the link between the detected emotional content and the way people look at abstract paintings by using eye-tracking recordings. A bottom-up saliency model was applied to compare with eye-tracking in order to predict the emotional salient regions of abstract paintings. In future, we aim to extract metadata associated to the paintings (e.g., title, keywords, textual description, etc.) in order to correlate it with the emotional responses of the paintings. This research opens opportunity to understand why a specific painting is perceived as emotional on global and local scales.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130670080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Touching the Void -- Introducing CoST: Corpus of Social Touch","authors":"Merel M. Jung, R. Poppe, M. Poel, D. Heylen","doi":"10.1145/2663204.2663242","DOIUrl":"https://doi.org/10.1145/2663204.2663242","url":null,"abstract":"Touch behavior is of great importance during social interaction. To transfer the tactile modality from interpersonal interaction to other areas such as Human-Robot Interaction (HRI) and remote communication automatic recognition of social touch is necessary. This paper introduces CoST: Corpus of Social Touch, a collection containing 7805 instances of 14 different social touch gestures. The gestures were performed in three variations: gentle, normal and rough, on a sensor grid wrapped around a mannequin arm. Recognition of the rough variations of these 14 gesture classes using Bayesian classifiers and Support Vector Machines (SVMs) resulted in an overall accuracy of 54% and 53%, respectively. Furthermore, this paper provides more insight into the challenges of automatic recognition of social touch gestures, including which gestures can be recognized more easily and which are more difficult to recognize.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123297503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bursting our Digital Bubbles: Life Beyond the App","authors":"Y. Rogers","doi":"10.1145/2663204.2669984","DOIUrl":"https://doi.org/10.1145/2663204.2669984","url":null,"abstract":"30 years ago, a common caricature of computing was of a frustrated user sat staring at a PC hands hovering over a keyboard and mouse. Nowadays, the picture is very different. The PC has been largely overtaken by the laptop, the smartphone and the tablet; more and more people are using them extensively and everywhere as they go about their working and everyday lives. Instead the caricature has become one of people increasingly living in their own digital bubbles - heads-down glued to a mobile device, pecking and swiping at digital content with one finger. How can designers and researchers break out of this app mindset to exploit the new generation of affordable multimodal technologies, in the form of physical computing, internet of things, and sensor toolkits, to begin creating more diverse heads-up, hands-on, arms-out user experiences? In my talk I will argue for a radical rethink of our relationship with future technologies. One that inspires us, through shared devices, tools and data, to be more creative, playful and thoughtful of each other and our surrounding environments.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133854915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synchronising Physiological and Behavioural Sensors in a Driving Simulator","authors":"R. Taib, Benjamin Itzstein, Kun Yu","doi":"10.1145/2663204.2663262","DOIUrl":"https://doi.org/10.1145/2663204.2663262","url":null,"abstract":"Accurate and noise robust multimodal activity and mental state monitoring can be achieved by combining physiological, behavioural and environmental signals. This is especially promising in assistive driving technologies, because vehicles now ship with sensors ranging from wheel and pedal activity, to voice and eye tracking. In practice, however, multimodal user studies are confronted with challenging data collection and synchronisation issues, due to the diversity of sensing, acquisition and storage systems. Referencing current research on cognitive load measurement in a driving simulator, this paper describes the steps we take to consistently collect and synchronise signals, using the Orbit Measurement Library (OML) framework, combined with a multimodal version of a cinema clapperboard. The resulting data is automatically stored in a networked database, in a structured format, including metadata about the data and experiment. Moreover, fine-grained synchronisation between all signals is provided without additional hardware, and clock drift can be corrected post-hoc.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114265288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Secret Language of Our Body: Affect and Personality Recognition Using Physiological Signals","authors":"Julia Wache","doi":"10.1145/2663204.2666290","DOIUrl":"https://doi.org/10.1145/2663204.2666290","url":null,"abstract":"We present a novel framework for decoding individuals? emotional state and personality traits based on physiological responses to affective movie clips. During watching 36 video clips we used measures of Electrocardiogram (ECG), Galvanic Skin Response (GSR), facial-Electroencephalogram (EEG) and facial emotional responses to decode i) the emotional state of partcipants and ii) their Big Five personality traits extending previous work that had connected either explicit (user ratings) with some implicit (physiological) affective responses or one of them with selected personality traits. We make the first dataset comprising both affective and personality information publicly available for further research and we further explore different methods and implementations for automated emotion and personality detection for future applications.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114271706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced Autocorrelation in Real World Emotion Recognition","authors":"S. Meudt, F. Schwenker","doi":"10.1145/2663204.2666276","DOIUrl":"https://doi.org/10.1145/2663204.2666276","url":null,"abstract":"Multimodal emotion recognition in real world environments is still a challenging task of affective computing research. Recognizing the affective or physiological state of an individual is difficult for humans as well as for computer systems, and thus finding suitable discriminative features is the most promising approach in multimodal emotion recognition. In the literature numerous features have been developed or adapted from related signal processing tasks. But still, classifying emotional states in real world scenarios is difficult and the performance of automatic classifiers is rather limited. This is mainly due to the fact that emotional states can not be distinguished by a well defined set of discriminating features. In this work we present an enhanced autocorrelation feature as a multi pitch detection feature and compare its performance to feature well known, and state-of-the-art in signal and speech processing. Results of the evaluation show that the enhanced autocorrelation outperform other state-of-the-art features in case of the challenge data set. The complexity of this benchmark data set lies in between real world data sets showing naturalistic emotional utterances, and the widely applied and well-understood acted emotional data sets.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114603467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Speaker- and Corpus-Independent Methods for Affect Classification in Computational Paralinguistics","authors":"Heysem Kaya","doi":"10.1145/2663204.2666284","DOIUrl":"https://doi.org/10.1145/2663204.2666284","url":null,"abstract":"The analysis of spoken emotions is of increasing interest in human computer interaction, in order to drive the machine communication into a humane manner. It has manifold applications ranging from intelligent tutoring systems to affect sensitive robots, from smart call centers to patient telemonitoring. In general the study of computational paralinguistics, which covers the analysis of speaker states and traits, faces with real life challenges of inter-speaker and inter-corpus variability. In this paper, a brief summary of the progress and future directions of my PhD study titled Adaptive Mixture Models for Speech Emotion Recognition that targets these challenges are given. An automatic mixture model selection method for Mixture of Factor Analyzers is proposed for modeling high dimensional data. To provide the mentioned statistical method a compact set of potent features, novel feature selection methods based on Canonical Correlation Analysis are introduced.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117057134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}