RFMIR '14 | Pub Date: 2014-11-16 | DOI: 10.1145/2666253.2666258
Z. Hammal, J. Cohn
{"title":"Intra- and Interpersonal Functions of Head Motion in Emotion Communication","authors":"Z. Hammal, J. Cohn","doi":"10.1145/2666253.2666258","DOIUrl":"https://doi.org/10.1145/2666253.2666258","url":null,"abstract":"In affective computing, head motion too often has been considered only a nuisance variable, something to control when aligning face images for analysis of facial expression or identity. Yet, recent research suggests that head motion is critical to the communication of emotion and the regulation of face-to-face interaction. Using a generic head tracker, strong relationships between head motion and emotion at both the individual and dyadic levels were found in studies of distressed couples and mother-infant dyads. These findings raise key research questions about head motion and other nonverbal displays: How extensively do they communicate emotion and psychological distress or disorder? What is their temporal coordination with facial expression? How are they coordinated interpersonally? Windowed cross-correlation and actor-partner analysis are proposed for the latter.","PeriodicalId":254468,"journal":{"name":"RFMIR '14","volume":"6 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117338737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RFMIR '14 | Pub Date: 2014-11-16 | DOI: 10.1145/2666253.2666260
M. Valstar
{"title":"Automatic Behaviour Understanding in Medicine","authors":"M. Valstar","doi":"10.1145/2666253.2666260","DOIUrl":"https://doi.org/10.1145/2666253.2666260","url":null,"abstract":"Now that Affective Computing and Social Signal Processing methods are becoming increasingly robust and accurate, novel areas of applications with significant societal impact are opening up for exploration. Perhaps one of the most promising areas is the application of automatic expressive behaviour understanding to help diagnose, monitor, and treat medical conditions that themselves alter a person's social and affective signals. This work argues that this is now essentially a new area of research, called behaviomedics. It gives a definition of the area, discusses the most important groups of medical conditions that could benefit from this, and makes suggestions for future directions.","PeriodicalId":254468,"journal":{"name":"RFMIR '14","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126061952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RFMIR '14 | Pub Date: 2014-11-16 | DOI: 10.1145/2666253.2666256
Francesca Bonin, E. Gilmartin, Carl Vogel, N. Campbell
{"title":"Topics for the Future: Genre Differentiation, Annotation, and Linguistic Content Integration in Interaction Analysis","authors":"Francesca Bonin, E. Gilmartin, Carl Vogel, N. Campbell","doi":"10.1145/2666253.2666256","DOIUrl":"https://doi.org/10.1145/2666253.2666256","url":null,"abstract":"In this paper we discuss three topics central to discussion of the future of multimodal research -- genre differentiation, stardardization of annotation, and integration of social and verbal context.","PeriodicalId":254468,"journal":{"name":"RFMIR '14","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125209764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RFMIR '14 | Pub Date: 2014-11-16 | DOI: 10.1145/2666253.2666261
A. A. Salah
{"title":"Natural Multimodal Interaction with a Social Robot: What are the Premises?","authors":"A. A. Salah","doi":"10.1145/2666253.2666261","DOIUrl":"https://doi.org/10.1145/2666253.2666261","url":null,"abstract":"Social robotics seeks to impart to robots the capability of analyzing and synthesizing appropriate social and affective signals that will allow them to function in a social context as seamlessly as possible. Recent efforts in social robotics looks at the development of social skills in humans and non-human animals, resulting in ambitious long-term goals that require great progress in multimodal interaction research as a necessary step. In this paper, we critically discuss the grand challenge of social interaction in robotics and some of its premises.","PeriodicalId":254468,"journal":{"name":"RFMIR '14","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132702954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RFMIR '14 | Pub Date: 2014-11-16 | DOI: 10.1145/2666253.2666254
M. Cristani, R. Ferrario
{"title":"Statistical Pattern Recognition Meets Formal Ontologies: Towards a Semantic Visual Understanding","authors":"M. Cristani, R. Ferrario","doi":"10.1145/2666253.2666254","DOIUrl":"https://doi.org/10.1145/2666253.2666254","url":null,"abstract":"In this paper we propose a series of novel lines of research emerging from the encounter of computer vision and formal ontologies. The main underlying idea is that a genuine visual knowledge can only emerge by the integration of the algorithmic and the semantic layer, which are typical of the two disciplines. We also present some early attempts at applying the proposed hybrid approach to tackle some important and still unsolved issues.","PeriodicalId":254468,"journal":{"name":"RFMIR '14","volume":"40 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132153679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RFMIR '14 | Pub Date: 2014-11-16 | DOI: 10.1145/2666253.2666255
Rene Kaiser, Ferdinand Fuhrmann
{"title":"Multimodal Interaction for Future Control Centers: Interaction Concept and Implementation","authors":"Rene Kaiser, Ferdinand Fuhrmann","doi":"10.1145/2666253.2666255","DOIUrl":"https://doi.org/10.1145/2666253.2666255","url":null,"abstract":"We present an innovative multi-modal interaction concept based on a human-centered design for control centers. The applied multi-layered hardware and software architecture directly supports the users in performing their lengthy monitoring and urgent alarm handling tasks. We combine visual cues, gestural interaction, audio information, and intelligent data processing into a single, universal interface. We further realized the presented concept by a prototypical implementation, using state-of-the-art interaction technologies. Moreover, the paper critically reflects on the long-term applicability of the proposed interfaces and outlines immediate plans for their evaluation. Finally, we indicate several research challenges regarding the real-world application of the presented interaction concepts.","PeriodicalId":254468,"journal":{"name":"RFMIR '14","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125774697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RFMIR '14 | Pub Date: 2014-11-16 | DOI: 10.1145/2666253.2666259
Maria Koutsombogera, Haris Papageorgiou
{"title":"Multimodal Analytics and its Data Ecosystem","authors":"Maria Koutsombogera, Haris Papageorgiou","doi":"10.1145/2666253.2666259","DOIUrl":"https://doi.org/10.1145/2666253.2666259","url":null,"abstract":"Nowadays, flexible capturing techniques allow the creation of datasets rich with information on multimodal conversational behavior. Such data consist in non-verbal and conversational features playing an important role in the interaction strategies and are used for the development of interaction models. In this paper we will focus on the representation of the interactional behavior from the multimodal conversation analysis perspective while we'll discuss the need of a data ecosystem to support the development of applications in the field of multimodal interaction.","PeriodicalId":254468,"journal":{"name":"RFMIR '14","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127942131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RFMIR '14 | Pub Date: 2014-11-16 | DOI: 10.1145/2666253.2666264
A. Potamianos
{"title":"Cognitive Multimodal Processing: from Signal to Behavior","authors":"A. Potamianos","doi":"10.1145/2666253.2666264","DOIUrl":"https://doi.org/10.1145/2666253.2666264","url":null,"abstract":"Affective computing, social and behavioral signal processing are emerging research disciplines that attempt to automatically label the emotional, social and cognitive state of humans using features extracted from audio-visual streams. I argue that this monumental task cannot succeed unless the particularities of the human cognitive processing are incorporated into our models, especially given that often the quantities we are called to model are either biased cognitive abstractions of the real world or altogether fictional creations of our cognition. A variety of cognitive processes that make computational modeling especially challenging are outlined herein, notably: 1) (joint) attention and saliency, 2) common ground, conceptual semantic spaces and representation learning, 3) fusion across time, modalities and cognitive representation layers, and 4) dual-system processing (system one vs. system two) and cognitive decision non-linearities. The grand challenges are outlined and examples are given illustrating how to design models that are both high-performing and respect basic cognitive organization principles. It is shown that such models can achieve good generalization and representation power, as well as model cognitive biases, a prerequisite for modeling and predicting human behavior.","PeriodicalId":254468,"journal":{"name":"RFMIR '14","volume":"350 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132418492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RFMIR '14 | Pub Date: 2014-11-16 | DOI: 10.1145/2666253.2666266
G. Riccardi
{"title":"Towards healthcare personal agents","authors":"G. Riccardi","doi":"10.1145/2666253.2666266","DOIUrl":"https://doi.org/10.1145/2666253.2666266","url":null,"abstract":"For a long time, the research on human-machine conversation and interaction has inspired futuristic visions created by film directors and science fiction writers. Nowadays, there has been great progress towards this end by the extended community of artificial intelligence scientists spanning from computer scientists to neuroscientists. In this paper we first review the tension between the latest advances in the technology of virtual agents and the limitations in the modality, complexity and sociability of conversational agent interaction. Then we identify a research challenge and target for the research and technology community. We need to create a vision and research path to create personal agents that are perceived as devoted assistants and counselors in helping end-users managing their own healthcare and well-being throughout their life. Such target is a high-payoff research agenda with high-impact on the society. In this position paper, following a review of the state-of-the-art in conversational agent technology, we discuss the challenges in spoken/multimodal/multi-sensorial interaction needed to support the development of Healthcare Personal Agents.","PeriodicalId":254468,"journal":{"name":"RFMIR '14","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122719965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RFMIR '14 | Pub Date: 2014-11-16 | DOI: 10.1145/2666253.2666263
M. Chetouani
{"title":"Role of Inter-Personal Synchrony in Extracting Social Signatures: Some Case Studies","authors":"M. Chetouani","doi":"10.1145/2666253.2666263","DOIUrl":"https://doi.org/10.1145/2666253.2666263","url":null,"abstract":"Originally studied by developmental psychologists, synchrony has now captured the interest of researchers in such fields as social signal processing (SSP). Developing metrics that can capture dynamics of social interaction is of particular interest, called here Social Signatures (behavioral characteristics). We describe several case studies for extracting robust and distinctive social signatures : early parent-infant interaction, new framework by exploring hormone changes as ground-truth labeling of quality of interaction and a robot learning approach based on neural networks is able to capture distinctive social signatures between clinical groups. Finally, we discuss future perspectives of learning and characterizing social signatures.","PeriodicalId":254468,"journal":{"name":"RFMIR '14","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130391337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}