2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW): Latest Publications

Robot mirroring: Improving well-being by fostering empathy with an artificial agent representing the self
David Antonio Gómez Jáuregui, Felix Dollack, Monica Perusquía-Hernández
DOI: 10.1109/aciiw52867.2021.9666320 (published 2021-09-28)
Abstract: Well-being has become a major societal goal. Being well means being physically and mentally healthy, and feeling empowered is also a component of well-being. Recently, self-tracking has been proposed as a means to achieve increased awareness, giving users the opportunity to identify and reduce undesired behaviours. However, inappropriately communicated self-tracking results can cause the opposite effect. To address this, subtle self-tracking feedback that mirrors the self's state onto an embodied artificial agent has been proposed: by eliciting empathy towards the artificial agent and fostering helping behaviours, users would help themselves as well. We searched the literature for evidence supporting or opposing the robot mirroring framework. The results show an increasing interest in self-tracking technologies for well-being management. Current discussions address what can be achieved with different levels of automation; the type and relevance of feedback; and the role that artificial agents, such as chatbots and robots, might play in supporting people's therapies. These findings support further development of the robot mirroring framework to improve medical, hedonic, and eudaemonic well-being.
Cited: 2
Job Interview Training System using Multimodal Behavior Analysis
Nao Takeuchi, Tomoko Koda
DOI: 10.1109/aciiw52867.2021.9666270 (published 2021-09-28)
Abstract: This paper introduces a system that recognizes the nonverbal behaviors of an interviewee, namely gaze, facial expression, and posture, using a Tobii eye tracker and cameras. The system compares the recognition results against models of exemplary interviewee nonverbal behavior and highlights the behaviors that need improvement while playing back the interview recording. The development goal was an inexpensive, easy-to-use system built from commercially available hardware, open-source code, and a CG agent that provides feedback to the interviewee. An initial evaluation indicates that improvements are needed in both the recognition accuracy of nonverbal behaviors and the quality of the interaction with the CG agent.
Cited: 1
Keep it Simple: Handcrafting Feature and Tuning Random Forests and XGBoost to face the Affective Movement Recognition Challenge 2021
Vincenzo D'Amato, L. Oneto, A. Camurri, D. Anguita
DOI: 10.1109/aciiw52867.2021.9666428 (published 2021-09-28)
Abstract: In this paper we address the Affective Movement Recognition Challenge 2021, which is based on three naturalistic datasets of body movement, a fundamental component of everyday living both in executing the actions that make up physical functioning and in the rich expression of affect, cognition, and intent. The datasets were built on a deep understanding of the requirements of automatic detection technology for chronic-pain physical rehabilitation, maths problem solving, and interactive dance contexts, respectively. We rely on a single, simple yet effective approach that is competitive with state-of-the-art results in the literature on all three datasets. The approach is a two-step procedure: first we carefully handcraft features that fully and synthetically represent the raw data, then we apply Random Forest and XGBoost, carefully tuned with rigorous statistical procedures, on top of them to deliver the predictions. As requested by the challenge, we report results in terms of three metrics: accuracy, F1-score, and Matthews Correlation Coefficient.
Cited: 3
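The three challenge metrics named in the abstract (accuracy, F1-score, and Matthews Correlation Coefficient) can all be computed from the binary confusion counts. A minimal stdlib-only sketch for the binary case (the example labels are illustrative, not challenge data):

```python
import math

def binary_metrics(y_true, y_pred):
    """Accuracy, positive-class F1, and Matthews Correlation Coefficient
    from binary (0/1) label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, f1, mcc

acc, f1, mcc = binary_metrics([1, 0, 1, 1, 0, 0, 1, 0],
                              [1, 0, 0, 1, 0, 1, 1, 0])
```

Unlike accuracy, MCC stays near zero for a majority-class predictor on skewed data, which is why challenges on imbalanced datasets request it alongside accuracy and F1.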
HirePreter: A Framework for Providing Fine-grained Interpretation for Automated Job Interview Analysis
Wasifur Rahman, Sazan Mahbub, Asif Salekin, M. Hasan, E. Hoque
DOI: 10.1109/aciiw52867.2021.9666201 (published 2021-09-28)
Abstract: There has been a rise in automated technologies that screen potential job applicants through affective signals captured from video-based interviews. These tools can make the interview process scalable and objective, but they often provide little to no information about how the machine learning model makes crucial decisions that impact the livelihoods of thousands of people. We built an ensemble model, combining Multiple-Instance-Learning and Language-Modeling based models, that predicts whether an interviewee should be hired. Using both model-specific and model-agnostic interpretation techniques, we can decipher the most informative time segments and features driving the model's decision making. Our analysis also shows that our models are significantly influenced by the beginning and ending portions of the video. Our model achieves 75.3% accuracy in predicting whether an interviewee should be hired on the ETS Job Interview dataset. The approach can be extended to interpret other video-based affective computing tasks such as analyzing sentiment, measuring credibility, or coaching individuals to collaborate more effectively in a team.
Cited: 1
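The abstract describes an ensemble of a Multiple-Instance-Learning model and a Language-Modeling based model but does not specify the fusion rule; a common minimal choice is weighted late fusion of the two models' output probabilities. The weight and decision threshold below are illustrative assumptions, not the paper's method:

```python
def late_fusion(p_mil: float, p_lm: float, w: float = 0.5) -> float:
    """Weighted average of the two models' hire probabilities.
    In practice w would be tuned on a validation split."""
    return w * p_mil + (1 - w) * p_lm

def decide(p_mil: float, p_lm: float, threshold: float = 0.5) -> bool:
    """Final hire/no-hire decision from the fused probability."""
    return late_fusion(p_mil, p_lm) >= threshold
```

One advantage of fusing at the probability level is that each constituent model can still be interpreted separately, which matches the paper's goal of fine-grained interpretation.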
Emotions in Socio-cultural Interactive AI Agents
A. Malhotra, J. Hoey
DOI: 10.1109/aciiw52867.2021.9666252 (published 2021-09-28)
Abstract: With the advancement of AI and robotics, computer systems have been put to many practical uses in domains such as healthcare, retail, and households. As AI agents become part of our day-to-day life, successful human-machine interaction becomes an essential part of the experience. Understanding the nuances of human social interaction remains a challenging area of research, but there is growing consensus that emotional identity, or what social face a person presents in a given context, is a critical aspect. Understanding the identities displayed by humans, together with the agent's own identity and the social context, is therefore a crucial skill for a socially interactive agent. In this paper, we provide an overview of a sociological theory of interaction called Affect Control Theory (ACT) and its recent extension, BayesACT. We discuss how this theory can track the fine-grained dynamics of an interaction and explore how the associated computational model of emotion can be used by socially interactive agents. ACT considers the cultural sentiments (emotional feelings) about the concepts in the context, the identities at play, and the emotions felt, and steers the interaction toward maximizing emotional coherence. We argue that an AI agent's understanding of itself, and of the culture and context it is in, can change human perception of an agent from something machine-like to something that can establish and maintain a meaningful emotional connection.
Cited: 0
emoPaint: Exploring Emotion and Art in a VR-based Creativity Tool
Jungah Son
DOI: 10.1109/aciiw52867.2021.9666398 (published 2021-09-28)
Abstract: I present emoPaint, a painting application that allows users to create paintings expressive of human emotions using a range of visual elements. While previous systems have introduced painting in 3D space, emoPaint focuses on supporting emotional characteristics by providing pre-made emotion brushes and allowing users to subsequently change the expressive properties of their paintings. The pre-made emotion brushes include art elements such as line textures, shape parameters, and color palettes, enabling users to control the expression of emotion in their paintings. I describe the implementation and illustrate paintings created with emoPaint.
Cited: 0
Discrete versus Ordinal Time-Continuous Believability Assessment
Cristiana Pacheco, Dávid Melhárt, Antonios Liapis, Georgios N. Yannakakis, Diego Pérez-Liébana
DOI: 10.1109/aciiw52867.2021.9666288 (published 2021-09-28)
Abstract: What is believability, and how do we assess it? These questions remain a challenge in human-computer interaction and games research. When assessing the believability of agents, researchers typically opt for an overall view of believability reminiscent of the Turing test. Current evaluation approaches are diverse and have yet to establish a common framework. In this paper, we propose treating believability as a time-continuous phenomenon. We conducted a study in which participants play a one-versus-one shooter game and annotate the character's believability, facing two opponents that exhibit different behaviours. In this novel process, annotations are made moment-to-moment using two different annotation schemes, BTrace and RankTrace, followed by the user's believability preference between the two playthroughs, effectively allowing us to compare the two annotation tools and time-continuous assessment with discrete assessment. Results suggest that a binary annotation tool may be more intuitive to use than its continuous counterpart and provides more information on context. We conclude that this method may offer a necessary addition to current assessment techniques.
Cited: 0
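Time-continuous traces such as those produced by RankTrace are often analysed ordinally, keeping only the direction of change between successive samples rather than absolute intensity values. A minimal sketch of that conversion (the threshold `eps` and the sampling scheme are assumptions for illustration, not the paper's protocol):

```python
def to_ordinal(trace, eps=0.05):
    """Map a time-continuous annotation trace to ordinal labels:
    +1 (increase), -1 (decrease), 0 (stable) between successive samples."""
    return [1 if b - a > eps else (-1 if a - b > eps else 0)
            for a, b in zip(trace, trace[1:])]

labels = to_ordinal([0.1, 0.4, 0.38, 0.1, 0.1])
```

Working with relative changes sidesteps the problem that different annotators use the intensity scale differently, which is one motivation for ordinal processing of continuous annotation.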
Modeling the Induction of Psychosocial Stress in Virtual Reality Simulations
Celia Kessassi
DOI: 10.1109/aciiw52867.2021.9666443 (published 2021-09-28)
Abstract: In recent years, a wide number of virtual reality applications dealing with psychosocial stress have emerged. However, our current understanding of stress and psychosocial stress in virtual reality hinders our ability to finely control stress induction. In my PhD project I plan to develop a computational model that describes the respective impact of each factor inducing psychosocial stress, including virtual reality factors, personal factors, and other situational factors.
Cited: 0
Comparison of Deep Learning Approaches for Protective Behaviour Detection Under Class Imbalance from MoCap and EMG data
Karim Radouane, Andon Tchechmedjiev, Binbin Xu, S. Harispe
DOI: 10.1109/aciiw52867.2021.9666417 (published 2021-09-28)
Abstract: The AffecMove challenge, organised in the context of the H2020 EnTimeMent project, offers three movement-classification tasks in realistic settings and use cases. Our team, from the EuroMov DHM laboratory, participated in Task 1: detecting protective behaviour (against pain) from motion capture and EMG data in patients suffering from pain-inducing musculoskeletal disorders. We implemented two simple baseline systems, an LSTM with pre-training (NTU-60) and a Transformer. We also adapted PA-ResGCN, a graph convolutional network for skeleton-based action classification with state-of-the-art performance, to protective behaviour detection, augmented with strategies to handle class imbalance. For PA-ResGCN-N51 we explored naïve fusion strategies with an EMG-only convolutional neural network, which did not improve overall performance. Unsurprisingly, the best-performing system was PA-ResGCN-N51 (without EMG) with an F1 score of 53.36% on the test set for the minority class (MCC 0.4247). The Transformer baseline (MoCap + EMG) came second at 41.05% F1 (MCC 0.3523) and the LSTM baseline third at 31.16% F1 (MCC 0.1763). On the validation set, the LSTM showed performance comparable to PA-ResGCN; we hypothesize that the LSTM over-fitted on a validation set that was not very representative of the train/test distribution.
Cited: 1
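The abstract mentions strategies to handle class imbalance without detailing them; one standard strategy is to weight the training loss by inverse class frequency, so that the minority (protective-behaviour) class contributes as much as the majority class. A stdlib-only sketch (the 90/10 split below is illustrative, not the challenge distribution):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class weights inversely proportional to class frequency,
    normalised so the mean weight over all samples equals 1."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

weights = inverse_frequency_weights([0] * 9 + [1] * 1)  # 90/10 imbalance
```

Passing these values as per-class loss weights (for example, the weight argument of a cross-entropy loss) rebalances gradients without resampling the data.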
Implementing Parallel and Independent Movements for a Social Robot's Affective Expressions
Hannes Ritschel, Thomas Kiderle, E. André
DOI: 10.1109/aciiw52867.2021.9666341 (published 2021-09-28)
Abstract: The design and playback of natural, believable movements is a challenge for social robots, which face several limitations due to their physical embodiment and sometimes also their software. Taking the expression of happiness as an example, we present an approach for implementing parallel and independent movements on a social robot that lacks a full-fledged animation API. The technique can create more complex movement sequences than the typical sequential playback of poses and utterances, and is thus better suited for expressing affect and nonverbal behavior.
Cited: 1