2015 International Conference on Affective Computing and Intelligent Interaction (ACII) — Latest Articles

Avatar and participant gender differences in the perception of uncanniness of virtual humans
Jacqueline D. Bailey
DOI: https://doi.org/10.1109/ACII.2017.8273657
Abstract: The widespread use of avatars in training and simulation has expanded from entertainment to more serious roles. This change has emerged from the need to develop cost-effective and customizable avatars for interaction with trainees. While the use of avatars continues to expand, the impact of individual trainee factors on training outcomes, and how avatar design decisions may interact with these factors, is not fully understood. The uncanny valley also has yet to be resolved, which may impair users' perception and acceptance of avatars and the associated training scenarios. Gender has emerged as an important consideration when designing avatars, both in terms of gender differences in trainee perceptions and in terms of the impact of an avatar's gender on those perceptions and experiences. The startle response of participants is measured to determine their affective response to how pleasant the avatar is perceived to be, with the aim of ensuring positive training outcomes.
Pages: 571-575 | Published: 2017-10-01 | Citations: 2
Neural conditional ordinal random fields for agreement level estimation
Nemanja Rakicevic, Ognjen Rudovic, Stavros Petridis, M. Pantic
DOI: https://doi.org/10.1109/ICPR.2016.7899967
Abstract: We present a novel approach to automated estimation of agreement intensity levels from facial images. To this end, we employ the MAHNOB Mimicry database of subjects recorded during dyadic interactions, where the facial images are annotated with agreement intensity levels on a Likert scale (strong disagreement, disagreement, neutral, agreement and strong agreement). Dynamic modelling of the agreement levels is accomplished by means of a Conditional Ordinal Random Field model. Specifically, we propose a novel Neural Conditional Ordinal Random Field model that performs non-linear feature extraction from face images using Neural Networks, while also modelling temporal and ordinal relationships between the agreement levels. We show in our experiments that the proposed approach outperforms existing methods for modelling sequential data. Preliminary results obtained on five subjects demonstrate that the intensity of agreement can successfully be estimated from facial images (39% F1 score) using the proposed method.
Pages: 885-890 | Published: 2016-12-01 | Citations: 3
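The ordinal structure this abstract relies on — five ordered Likert levels rather than five unrelated classes — is commonly handled by decomposing each label into cumulative "is the level greater than k?" targets. The sketch below shows that generic decomposition only; it is an illustration, not the paper's CORF formulation:

```python
def ordinal_encode(level, n_levels=5):
    """Encode an ordinal label in 0..n_levels-1 as cumulative binary
    targets: one 'is level > k?' indicator per threshold k."""
    return [1 if level > k else 0 for k in range(n_levels - 1)]

def ordinal_decode(targets):
    """A predicted level is the number of thresholds it exceeds."""
    return sum(targets)

# Mapping the five agreement levels to 0 (strong disagreement) .. 4
# (strong agreement), "agreement" (level 3) encodes as [1, 1, 1, 0].
```

Unlike one-hot encoding, this decomposition penalizes a model more for confusing distant levels (strong disagreement vs. strong agreement) than adjacent ones.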
A data-driven validation of frontal EEG asymmetry using a consumer device
D. Friedman, Shai Shapira, L. Jacobson, M. Gruberger
DOI: https://doi.org/10.1109/ACII.2015.7344686
Abstract: Affective computing requires a reliable method to obtain real-time information regarding affective state, and one of the promising avenues is electroencephalography (EEG). We performed a study intended to test whether a low-cost consumer EEG device can be used to measure extreme emotional valence. One of the most studied frameworks relating affect to EEG is based on frontal hemispheric asymmetry. Our results indicate that a simple replication of the methods derived from this hypothesis might not be sufficient. However, using a data-driven approach based on feature engineering and machine learning, we describe a method that can reliably measure valence with the EPOC device. We discuss our study in the context of the theoretical and empirical background for frontal asymmetry.
Pages: 930-937 | Published: 2015-09-21 | Citations: 10
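The frontal-asymmetry framework the abstract refers to is usually operationalised as the difference of log alpha-band power between homologous right and left frontal electrodes (conventionally F4/F3). Below is a minimal stdlib-only sketch of that index; the electrode pairing, sampling rate, and plain-DFT band power are illustrative assumptions, not the authors' pipeline (a real analysis would use Welch's method and artifact rejection):

```python
import math

def alpha_bandpower(signal, fs, f_lo=8.0, f_hi=13.0):
    """Alpha-band power via a direct DFT over the 8-13 Hz bins."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

def frontal_alpha_asymmetry(left, right, fs):
    """ln(right alpha power) - ln(left alpha power); positive values are
    conventionally read as relatively greater left-frontal activity
    (alpha power is inversely related to cortical activation)."""
    return math.log(alpha_bandpower(right, fs)) - math.log(alpha_bandpower(left, fs))
```

Because the index is a log ratio, it is insensitive to overall amplitude scaling that affects both channels equally — one reason it is popular with consumer-grade hardware.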
Dynamic time warping: A single dry electrode EEG study in a self-paced learning task
T. Yamauchi, Kunchen Xiao, Casady Bowman, A. Mueen
DOI: https://doi.org/10.1109/ACII.2015.7344551
Abstract: This study investigates dynamic time warping (DTW) as a possible analysis method for EEG-based affective computing in a self-paced learning task, where inter- and intra-personal differences are large. In one experiment, participants (N=200) carried out an implicit category learning task while their frontal EEG signals were collected. Using DTW, we measured the dissimilarity distances of EEG signals between participants and examined the extent to which a k-Nearest Neighbors algorithm could predict a participant's self-rated feelings from signals taken from other participants (between-participants prediction). Results showed that DTW provides potentially useful characteristics for EEG data analysis in a heterogeneous setting. In particular, theory-based segmentation of the time-series data was particularly useful for DTW analysis, while smoothing and standardization were detrimental when applied in the self-paced learning task.
Pages: 56-62 | Published: 2015-09-21 | Citations: 18
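The between-participants pipeline described above — DTW dissimilarity between EEG sequences, then k-Nearest Neighbors over those distances — can be sketched as follows. This is an illustrative reconstruction of the two generic building blocks, not the study's code:

```python
from collections import Counter

def dtw_distance(a, b):
    """Classic DTW: minimum cumulative |a_i - b_j| cost over all
    monotone, boundary-anchored alignments of the two sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def knn_predict(query, train_seqs, train_labels, k=3):
    """Label a query sequence by majority vote among its k DTW-nearest
    training sequences (the between-participants setting: training
    sequences come from other participants)."""
    dists = sorted(
        (dtw_distance(query, seq), label)
        for seq, label in zip(train_seqs, train_labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

Note that DTW tolerates the different pacing across participants that a self-paced task produces — two sequences tracing the same shape at different speeds get a small distance, which a pointwise Euclidean comparison would not.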
MoodTracker: Monitoring collective emotions in the workplace
Yuliya Lutchyn, Paul Johns, A. Roseway, M. Czerwinski
DOI: https://doi.org/10.1109/ACII.2015.7344586
Abstract: Accurate and timely assessment of collective emotions in the workplace is a critical managerial task. However, perceptual, normative, and methodological challenges make it very difficult even for the most experienced organizational leaders. In this paper we present MoodTracker, a technological solution that can help overcome these challenges and facilitate continuous, real-time monitoring of collective emotions in large groups. MoodTracker is a program that runs on any PC and provides users with an interface for self-reporting their affect. The device was tested in situ for four weeks, during which we received over 3000 emotion self-reports. Based on the usage data, we concluded that users had a positive attitude toward MoodTracker and favorably evaluated its utility. From the collected data we were also able to establish some patterns of weekly and daily variation in employees' emotions in the workplace. We discuss practical applications and suggest directions for future development.
Pages: 295-301 | Published: 2015-09-21 | Citations: 17
Synestouch: Haptic + audio affective design for wearable devices
P. Paredes, Ryuka Ko, Arezu Aghaseyedjavadi, J. Chuang, J. Canny, Linda Babler
DOI: https://doi.org/10.1109/ACII.2015.7344630
Abstract: Little is known about the affective expressivity of multisensory stimuli in wearable devices. While the theory of emotion has referenced single-stimulus and multisensory experiments, it does not go further to explain the potential effects of sensory stimuli used in combination. In this paper, we present an analysis of combinations of two sensory modalities: haptic (more specifically, vibrotactile) stimuli and auditory stimuli. We present the design of a wrist-worn wearable prototype and empirical data from a controlled experiment (N=40), and analyze emotional responses from a dimensional (arousal + valence) perspective. Differences are exposed between "matching" the emotions expressed through each modality and "mixing" auditory and haptic stimuli that each express different emotions. We compare the effects of each condition to determine, for example, whether matching two negative stimulus emotions produces a stronger negative effect than mixing two mismatched emotions. The main research question we study is: when haptic and auditory stimuli are combined, is there an interaction effect between the emotional type and the modality of the stimuli? We present quantitative and qualitative data to support our hypotheses, complemented with a usability study investigating potential uses of the different modes. We conclude by discussing the implications for the design of affective interactions for wearable devices.
Pages: 595-601 | Published: 2015-09-21 | Citations: 8
On rater reliability and agreement based dynamic active learning
Yue Zhang, E. Coutinho, Björn Schuller, Zixing Zhang, M. Adam
DOI: https://doi.org/10.1109/ACII.2015.7344553
Abstract: In this paper, we propose two novel Dynamic Active Learning (DAL) methods with the aim of ultimately reducing the costly human labelling work required for subjective tasks such as speech emotion recognition. Compared to conventional Active Learning (AL) algorithms, the proposed DAL approaches employ a highly efficient adaptive query strategy that minimises the number of annotations through three advancements. First, we shift from the standard majority-voting procedure, in which unlabelled instances are annotated by a fixed number of raters, to an agreement-based annotation technique that dynamically determines how many human annotators are required to label a selected instance. Second, we introduce the concept of the order-based DAL algorithm by considering rater reliability and inter-rater agreement. Third, a highly dynamic development trend is implemented by upgrading the agreement levels depending on the prediction uncertainty. In extensive experiments on standardised test-beds, we show that the new dynamic methods significantly improve the efficiency of existing AL algorithms, reducing human labelling effort by up to 85.41% while achieving the same classification accuracy. The enhanced DAL derivations thus open up promising research directions for fully exploiting unlabelled data.
Pages: 70-76 | Published: 2015-09-21 | Citations: 17
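The first advancement above — replacing a fixed-size jury with agreement-based stopping — can be sketched as follows. This is an illustrative sketch under assumed parameters (threshold of 2 agreeing raters, cap of 5 queries), not the authors' implementation, and it omits the rater-reliability ordering and uncertainty-driven threshold upgrades they describe:

```python
from collections import Counter

def dynamic_label(raters, instance, agreement_threshold=2, max_raters=5):
    """Query raters one at a time and stop as soon as
    `agreement_threshold` of them agree on a label, rather than always
    polling a fixed jury. `raters` is a sequence of callables mapping an
    instance to a label. Returns (label, number_of_raters_queried)."""
    votes = Counter()
    for n_queried, rate in enumerate(raters[:max_raters], start=1):
        votes[rate(instance)] += 1
        label, count = votes.most_common(1)[0]
        if count >= agreement_threshold:
            return label, n_queried          # early stop: consensus reached
    # Budget exhausted without consensus: fall back to plurality vote.
    label, _ = votes.most_common(1)[0]
    return label, min(len(raters), max_raters)
```

Easy instances terminate after `agreement_threshold` queries, so the annotation budget concentrates on ambiguous instances — the mechanism behind the labelling-effort savings the abstract reports.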
EmoShapelets: Capturing local dynamics of audio-visual affective speech
Y. Shangguan, E. Provost
DOI: https://doi.org/10.1109/ACII.2015.7344576
Abstract: Automatic recognition of emotion in speech is an active area of research. One of the important open challenges relates to how the emotional characteristics of speech change over time. Past research has demonstrated the importance of capturing both global dynamics (across an entire utterance) and local dynamics (within segments of an utterance). In this paper, we propose a novel concept, EmoShapelets, to capture the local dynamics in speech. EmoShapelets capture changes in emotion that occur within utterances. We propose a framework to generate, update, and select EmoShapelets. We also demonstrate the discriminative power of EmoShapelets by using them with various classifiers to achieve results comparable to state-of-the-art systems on the IEMOCAP dataset. EmoShapelets can serve as basic units of emotion expression and provide additional evidence supporting the existence of local patterns of emotion underlying human communication.
Pages: 229-235 | Published: 2015-09-21 | Citations: 4
Multimodal approach for automatic recognition of machiavellianism
Zahra Nazari, Gale M. Lucas, J. Gratch
DOI: https://doi.org/10.1109/ACII.2015.7344574
Abstract: Machiavellianism, by definition, is the tendency to use other people as a tool to achieve one's own goals. Despite the large focus on the Big Five personality traits, this anti-social trait is relatively unexplored in the computational realm. Automatically recognizing anti-social traits can have important uses across a variety of applications. In this paper, we use negotiation as a setting that gives Machiavellians the opportunity to reveal their exploitative inclinations. We use textual, visual, acoustic, and behavioral cues to automatically predict High vs. Low Machiavellian personalities. The learned models have good accuracy compared with other personality-recognition methods, and we provide evidence that the automatically learned models are consistent with the existing literature on this anti-social trait, suggesting that these results can generalize to other domains.
Pages: 215-221 | Published: 2015-09-21 | Citations: 14
Facial expression recognition with multithreaded cascade of rotation-invariant HOG
Jinhui Chen, T. Takiguchi, Y. Ariki
DOI: https://doi.org/10.1109/ACII.2015.7344636
Abstract: We propose a novel and general framework, the multithreading cascade of rotation-invariant histograms of oriented gradients (McRiHOG), for facial expression recognition (FER). In this paper, we attempt to solve two problems in FER: obtaining high-quality local feature descriptors and building a robust classifying algorithm. The first solution is to adopt annular spatial-bin HOG (Histograms of Oriented Gradients) descriptors to describe local patches, which significantly enhances the descriptors' rotation invariance and feature-description accuracy. The second is a novel multithreading cascade that learns multiclass data simultaneously. The multithreading cascade is implemented through non-interfering boosting channels, each built to train weak classifiers for one expression. The superiority of McRiHOG over current state-of-the-art methods is demonstrated by evaluation experiments on three popular public databases: CK+, MMI, and AFEW.
Pages: 636-642 | Published: 2015-09-21 | Citations: 8
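As a point of reference for the descriptor above: a plain HOG cell accumulates gradient magnitudes into unsigned-orientation bins. The sketch below shows only this standard building block; the paper's annular spatial bins and rotation-invariance mechanism are not reproduced here:

```python
import math

def hog_cell(patch, n_bins=9):
    """Unsigned-orientation gradient histogram for one cell of a
    grayscale patch (a list of rows of pixel intensities), using central
    differences and L2 normalisation. Border pixels are skipped."""
    hist = [0.0] * n_bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # fold to [0, 180)
            hist[min(int(ang / (180.0 / n_bins)), n_bins - 1)] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]
```

A standard HOG descriptor rotates with the image content — the histogram mass simply shifts between bins — which is why the paper's annular spatial binning is needed to make the representation rotation-invariant.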