2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW): Latest Publications

Mitigating the Bias in Empathy Detection
Saurabh Hinduja
{"title":"Mitigating the Bias in Empathy Detection","authors":"Saurabh Hinduja","doi":"10.1109/ACIIW.2019.8925035","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925035","url":null,"abstract":"Different levels of human emotion detection and analysis is the basis of affective computing. To facilitate the research there has been extensive work in collecting and analysing huge amount of data. Many times, when collecting data, bias can be present (e.g, gender, ethnicity, age). Considering this my research will identify the factors on which bias is based and develop approaches and datasets to mitigate some of this bias. I am currently investigating the influence of action units on emotions and how they can be used for accurately recognising emotions and vice versa. In this paper, I detail why understanding biases is important, my overall research vision, a plan for my research moving forward, and I conclude with future work, challenges, and potential solutions to solve them.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115161857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
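As an illustration of one way the kind of demographic bias described above could be quantified and partially mitigated, the sketch below compares per-group F1 scores and reweights training samples by inverse group frequency. The group attribute, the data variables, and the random forest classifier are illustrative assumptions, not the author's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def per_group_f1(y_true, y_pred, groups):
    """Macro F1 per demographic group, exposing performance disparities."""
    return {g: f1_score(y_true[groups == g], y_pred[groups == g], average="macro")
            for g in np.unique(groups)}

def inverse_frequency_weights(groups):
    """Weight each sample inversely to its group's frequency so that
    over-represented groups do not dominate training."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / counts.sum()))
    return np.array([1.0 / freq[g] for g in groups])

# Hypothetical usage -- X: features (e.g. action-unit intensities),
# y: empathy labels, groups: annotated demographic attribute:
# clf = RandomForestClassifier().fit(
#     X_train, y_train, sample_weight=inverse_frequency_weights(groups_train))
# print(per_group_f1(y_test, clf.predict(X_test), groups_test))
```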
Prediction of Psychological Flexibility with multi-scale Heart Rate Variability and Breathing Features in an “in-the-wild” Setting
Abhishek Tiwari, J. Villatte, Shrikanth S. Narayanan, T. Falk
{"title":"Prediction of Psychological Flexibility with multi-scale Heart Rate Variability and Breathing Features in an “in-the-wild” Setting","authors":"Abhishek Tiwari, J. Villatte, Shrikanth S. Narayanan, T. Falk","doi":"10.1109/ACIIW.2019.8925281","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925281","url":null,"abstract":"Psychological flexibility (PF) has recently emerged as an important determinant in pain related outcomes. It is related to pain adaptation, social functioning and emotional well-being. A recent study indicates PF being a significant predictor of heart rate variability (HRV) and mediating relationship between HRV and pain interference. In recent years, HRV has been studied using non-linear dynamics approaches which better quantify the fractal behavior of the inter-beat interval time series. In this study, we propose the use of multi-scale HRV features for predicting PF. The new features are tested on a dataset collected from 200 hospital workers (nurses and staff) during their normal work shifts. We show that fusion of breathing signal features further improves the performance showing the complementarity of two feature sets. We achieve an overall improvement of 4.54% F1-score over benchmark HRV features. These results indicate the importance of non-linear features for PF measurement. An accurate measurement of PF can help in developing pain and distress intervention methods by unobtrusive measurement of physiological signals using wearable sensors in real life conditions.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115271746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
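The abstract does not name its exact non-linear features, but a common multi-scale HRV measure is the sample entropy of the coarse-grained inter-beat interval series. The sketch below is a simplified stand-in under that assumption; the scales, the parameters m and r, and the downstream classifier are illustrative choices rather than the paper's configuration.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Simplified sample entropy of a 1-D series (tolerance r * std)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def match_count(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)  # Chebyshev distance
        return (d <= tol).sum() - len(t)                           # exclude self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(ibi, scales=(1, 2, 3, 4, 5)):
    """Sample entropy of the coarse-grained inter-beat-interval series."""
    return np.array([sample_entropy(coarse_grain(ibi, s)) for s in scales])

# These multi-scale values could be concatenated with breathing-rate features
# and fed to a standard classifier to predict psychological flexibility.
```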
Social and Emotion AI: The Potential for Industry Impact
O. Perepelkina, A. Vinciarelli
{"title":"Social and Emotion AI: The Potential for Industry Impact","authors":"O. Perepelkina, A. Vinciarelli","doi":"10.1109/ACIIW.2019.8925051","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925051","url":null,"abstract":"The goal of this paper is to provide an account of the current progress of Social and Emotion AI, from their earliest pioneering stages to the maturity necessary to attract industrial interest. After defining the scope of these domains and showing that they overlap to a substantial extent with pre-existing computing areas (e.g., Social Signal Processing and Affective Computing), the article tries to measure the industrial interest they attract through several publicly available market forecasts. These show that the volume of the market in Social and Emotion AI is expected to grow considerably in the next years and, hence, the companies active in the field are likely to attract increasingly more attention and investment. Finally, the paper shows that the very appearance of products driven by Social and Emotion AI might be a likely reason for the extensive current debate on AI and ethics. Besides ensuring that the development of AI follows a path that ensures the greatest possible advantage for society, such a debate might actually define the next research avenues in the field.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121095442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Cultural Anthropology for Social Emotion Modeling: Principles of Application toward Diversified Social Signal Processing
Daniel White, H. Katsuno
{"title":"Cultural Anthropology for Social Emotion Modeling: Principles of Application toward Diversified Social Signal Processing","authors":"Daniel White, H. Katsuno","doi":"10.1109/ACIIW.2019.8925172","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925172","url":null,"abstract":"The practice of modeling social emotions has benefited from interdisciplinary engagements with other fields in the hard and human sciences; however, perspectives from cultural and social anthropology have been limited. This has at times resulted in the integration of emotion theories into emotion modeling that emphasize the universal communicability of social signals of emotion at the expense of accounting for cultural diversity evidenced in the ethnographic record. This paper outlines methods and findings of a collaborative effort between cultural anthropologists and engineers to create platforms for interdisciplinary communication and emotion modeling practices more sensitive to cultural diversity and better protected from risks of ethnic, racial, and ethnocentric bias. The paper presents five principles for applying anthropological perspectives to emotion modeling and ultimately argues for a consideration of design strategies for social signal processing based on recent ethnographic evidence of evolving human-robot relationships in Japan.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125019515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Emotion Recognition in Dementia: Advancing technology for multimodal analysis of emotion expression in everyday life
Deniece S. Nazareth
{"title":"Emotion Recognition in Dementia: Advancing technology for multimodal analysis of emotion expression in everyday life","authors":"Deniece S. Nazareth","doi":"10.1109/ACIIW.2019.8925059","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925059","url":null,"abstract":"This paper provides an overview of my PhD project that focuses on recognizing emotions in dementia by analyzing multi-modal expressions in autobiographical memories of older adults with dementia. The project aims for a better understanding how dementia influences emotional expressions and how dementia differs from the normal aging process. For this reason, spontaneous emotions will be elicited in autobiographical memories in two groups of older adults, one with dementia the other without, for comparison. Audio, video and physiological data will be collected at their home resulting in real-life environments. The emotional expressions can then be analyzed by extracting verbal, non-verbal, facial and gestural features from the audio, video and physiological data collected. In addition, a longitudinal study will be conducted with the older adults with dementia to investigate the longitudinal effect of dementia on emotions. A database of the emotional memories of these vulnerable groups will then be developed to contribute to the advancement of technologies for (automatic) multi-modal emotion recognition. The database will then be made available for the research community. Lastly, we will also develop visualization and statistical models to assess multi-modal patterns of emotion expression in these groups.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123019320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
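Purely as an illustration of the kind of multimodal feature extraction the project describes, the sketch below computes one simple feature set per modality: mean MFCCs from an audio recording via librosa and basic heart-rate statistics from inter-beat intervals. The file name and feature choices are assumptions, not the project's actual pipeline.

```python
import numpy as np
import librosa  # assumed available for audio feature extraction

def audio_features(wav_path):
    """Mean MFCCs as a minimal stand-in for spoken-memory audio features."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def cardiac_features(ibi_seconds):
    """Mean heart rate (bpm) and SDNN (ms) from inter-beat intervals."""
    ibi = np.asarray(ibi_seconds, dtype=float)
    return np.array([60.0 / ibi.mean(), ibi.std() * 1000.0])

# feature_vector = np.concatenate(
#     [audio_features("memory_recording.wav"), cardiac_features(ibi)])
```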
Your Body Reveals Your Impressions about Others: A Study on Multimodal Impression Detection
Chen Wang, T. Pun, G. Chanel
{"title":"Your Body Reveals Your Impressions about Others: A Study on Multimodal Impression Detection","authors":"Chen Wang, T. Pun, G. Chanel","doi":"10.1109/ACIIW.2019.8925052","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925052","url":null,"abstract":"Formed impressions are crucial for human-human interaction (e.g. a job interview) and an interaction with a virtual agent/robot, since they can impact people's perceptions and willingness to be involved in the interaction. There are studies on how facial features (e.g. skin color, face shape), acoustic signals and non-verbal behaviors (e.g. gestures, postures) create/leave certain impressions. However there is little research focusing on how our bodies disclose our already formed impression of someone. Forming an impression leads to emotions and behaviors which can be measured. In this paper, we investigate recognition of evoked impression of warmth and competence from the nonverbal behaviors expressed by the person forming the impression. We conducted an experiment in which participants were watching impression stimuli. We measured participant's facial expressions, eye movements and physiological reactions (electrocardiography and galvanic skin response). To recognize impressions, we tested 2 multivariate regression models with the aforementioned multimodal recordings. Our best results demonstrate the possibility to detect impressions along warmth and competence dimensions with a concordance correlation coefficient of 0.838 and 0.864. Facial expressions and eye movements are more reliable for impression detection compared with physiological signals. Finally, the higher the Berkeley emotion expressivity scores the participants have, the more accurately the impressions are detected.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129441536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
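The concordance correlation coefficient (CCC) reported above can be computed directly from its definition; a minimal sketch, assuming numpy arrays of continuous annotations and predictions:

```python
import numpy as np

def concordance_cc(y_true, y_pred):
    """Concordance correlation coefficient between annotations and predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (y_true.var() + y_pred.var() + (mean_t - mean_p) ** 2)
```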
Detecting F-formations & Roles in Crowded Social Scenes with Wearables: Combining Proxemics & Dynamics using LSTMs
Alessio Rosatelli, Ekin Gedik, H. Hung
{"title":"Detecting F-formations & Roles in Crowded Social Scenes with Wearables: Combining Proxemics & Dynamics using LSTMs","authors":"Alessio Rosatelli, Ekin Gedik, H. Hung","doi":"10.1109/ACIIW.2019.8925179","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925179","url":null,"abstract":"In this paper, we investigate the use of proxemics and dynamics for automatically identifying conversing groups, or so-called F-formations. More formally we aim to automatically identify whether wearable sensor data coming from 2 people is indicative of F-formation membership. We also explore the problem of jointly detecting membership and more descriptive information about the pair relating to the role they take in the conversation (i.e. speaker or listener). We jointly model the concepts of proxemics and dynamics using binary proximity and acceleration obtained through a single wearable sensor per person. We test our approaches on the publicly available MatchNMingle dataset which was collected during real-life mingling events. We find out that fusion of these two modalities performs significantly better than them independently, providing an AUC of 0.975 when data from 30-second windows are used. Furthermore, our investigation into roles detection shows that each role pair requires a different time resolution for accurate detection.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123675525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
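A minimal sketch of an LSTM that could fuse binary proximity and acceleration windows for pairwise F-formation membership is given below. The layer sizes, the assumed 20 Hz sampling rate behind the 30-second windows, and the per-step feature layout (one binary proximity value plus each person's 3-axis acceleration) are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PairwiseFFormationLSTM(nn.Module):
    """LSTM over fused per-pair features: binary proximity plus each
    person's 3-axis acceleration (7 values per time step, assumed)."""

    def __init__(self, n_features=7, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])      # logit: "pair shares an F-formation"

# 30-second windows at an assumed 20 Hz sampling rate -> 600 time steps.
model = PairwiseFFormationLSTM()
logits = model(torch.randn(8, 600, 7))
```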
Physiological Signal-Driven Camera Using EOG, EEG, and ECG
Yoshikazu Hirayama, T. Takashina, Y. Watanabe, Kensaku Fukumoto, Miyuki Yanagi, Ryota Horie, M. Ohkura
{"title":"Physiological Signal- Driven Camera Using EOG, EEG, and ECG","authors":"Yoshikazu Hirayama, T. Takashina, Y. Watanabe, Kensaku Fukumoto, Miyuki Yanagi, Ryota Horie, M. Ohkura","doi":"10.1109/ACIIW.2019.8925063","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925063","url":null,"abstract":"Photography is closely related to the affection of individuals. Camera is one of the promising applications of affective computing. Emotion is essentially a personal characteristic, and it should be dealt with individually and not generically. Therefore, physiological signals are expected to be effective in the field of photography. Previous researches have shown that physiological signals are promising, but a huge amount of physiological data is needed to confirm its effectiveness in realistic situations. We constructed prototypes to investigate the usefulness of this concept in daily life without waiting for the accumulation of a huge amount of physiological data. In the prototypes, we employ three types of physiological signals, namely EOG (electrooculography), EEG (electroencephalogram), and ECG (electrocardiogram). We plan to use these prototypes to evaluate the usefulness and effectiveness of physiological signals in photograpny in the real world.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125462946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
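The paper does not describe its trigger logic, so the sketch below is purely illustrative of how a physiological signal might drive a capture: a hypothetical shutter callback fires when the instantaneous heart rate estimated from ECG R-peaks jumps above its recent baseline. The thresholds, the sampling rate, and the trigger_capture() hook are assumptions, not the authors' design.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_series(ecg, fs=250.0):
    """Instantaneous heart rate (bpm) from R-peaks of a raw ECG trace."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=np.std(ecg))
    ibi = np.diff(peaks) / fs          # inter-beat intervals in seconds
    return 60.0 / ibi

def should_trigger(hr, baseline_beats=10, jump_bpm=15.0):
    """True when the latest heart rate exceeds the recent mean by jump_bpm."""
    if len(hr) <= baseline_beats:
        return False
    return hr[-1] - hr[-baseline_beats - 1:-1].mean() > jump_bpm

# if should_trigger(heart_rate_series(ecg_buffer)):
#     trigger_capture()  # hypothetical camera shutter hook
```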
Synthesizing Physiological and Motion Data for Stress and Meditation Detection
Md Taufeeq Uddin, Shaun J. Canavan
{"title":"Synthesizing Physiological and Motion Data for Stress and Meditation Detection","authors":"Md Taufeeq Uddin, Shaun J. Canavan","doi":"10.1109/ACIIW.2019.8925245","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925245","url":null,"abstract":"In this work, we present the synthesis of physiological and motion data to classify, detect and estimate affective state ahead of time (i.e. predict). We use raw physiological and motion signals to predict the next values of the signal following a temporal modeling scheme. The physiological signals are synthesized using a one-dimensional convolutional neural network. We then use a random forest to predict the affective state from the newly synthesized data. In our experimental design, we synthesize and predict both stress and mediation states. We show the utility of our approach to data synthesis for prediction of stress and meditation states through two methods. First, we report the concordance correlation coefficient of the synthetic signals compared to the ground truth. Secondly, we report prediction results on synthetic data that are comparable to the original ground-truth signals.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127956870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
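A hedged sketch of the synthesis step is shown below: a small one-dimensional CNN regresses the next sample of a physiological signal from a past window, and rolling that prediction forward yields a synthetic continuation. The architecture, window size, and training details are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class NextSampleCNN(nn.Module):
    """Small 1-D CNN that regresses the next sample of a signal from a
    past window (sizes are illustrative assumptions)."""

    def __init__(self, window=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * window, 1),
        )

    def forward(self, x):              # x: (batch, 1, window)
        return self.net(x)

model = NextSampleCNN()
past = torch.randn(4, 1, 128)          # four windows of a physiological signal
next_sample = model(past)              # predicted next values, shape (4, 1)
# Features of the synthesized windows could then be classified with a
# random forest, e.g. sklearn's RandomForestClassifier.
```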
Deep Domain Adaptation for Facial Expression Analysis
Nikolai Kalischek, Patrick Thiam, Peter Bellmann, F. Schwenker
{"title":"Deep Domain Adaptation for Facial Expression Analysis","authors":"Nikolai Kalischek, Patrick Thiam, Peter Bellmann, F. Schwenker","doi":"10.1109/ACIIW.2019.8925055","DOIUrl":"https://doi.org/10.1109/ACIIW.2019.8925055","url":null,"abstract":"Deep learning has attracted a lot of attention in various fields over the past few years, including facial expression recognition. However, applying deep learning techniques to facial expression recognition is not straightforward. There are several drawbacks for successful deep expression recognition systems. Besides the lack of sufficient training data, facial expressions convey various inter-personal morphological and character differences. Therefore, an expression recognition network often suffers from overfitting and missing generalizability. However, multiple learning techniques, generally known as domain adaptation, have been proposed to address the lack of sufficient data and missing variance. Consequently, facial expression recognition may profit from domain adaptation. In this paper, we evaluate the applicability of deep domain adaptation for facial expression recognition. We describe two domain adaptation frameworks, one for single frame facial expression analysis and one for sequence-based facial expression analysis based on the Self-Ensembling method defined in [1]. The former is evaluated on the CK+ dataset [2], [3], the latter on the SenseEmotion database [4] of the University of Ulm. Our results indicate that domain adaptation is mostly applicable for person-specific facial expression recognition.","PeriodicalId":193568,"journal":{"name":"2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127808288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
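The Self-Ensembling method cited above trains a student network with a supervised loss on labelled source-domain samples plus a consistency loss against an exponential-moving-average "teacher" on unlabelled target-domain samples. The sketch below illustrates that general idea only; the loss weighting, EMA decay, and network choice are illustrative assumptions, not the paper's exact setup.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights track the student as an exponential moving average."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def self_ensembling_step(student, teacher, x_src, y_src, x_tgt,
                         optimizer, w_cons=1.0):
    """One training step: supervised loss on labelled source faces plus a
    consistency loss to the teacher's predictions on unlabelled target faces."""
    sup = F.cross_entropy(student(x_src), y_src)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_tgt), dim=1)
    cons = F.mse_loss(F.softmax(student(x_tgt), dim=1), teacher_probs)
    loss = sup + w_cons * cons
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()

# teacher = copy.deepcopy(student)  # the teacher is initialised from the student
```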