An Efficient Temporal Feature Aggregation of Audio-Video Signals for Human Emotion Recognition

Lovejit Singh, Sarbjeet Singh, N. Aggarwal, Ranjit Singh, Gagan Singla

2021 6th International Conference on Signal Processing, Computing and Control (ISPCC), 2021-10-07
DOI: 10.1109/ISPCC53510.2021.9609528
Citations: 2
Abstract
Given the significance of human behavioral intelligence in computing devices, this work focuses on facial expressions and speech for human emotion recognition in multimodal (audio-video) signals. Audio-video signals consist of frames that represent the temporal activity of human facial expressions and speech. It is challenging to determine an efficient method for constructing a spatial and temporal feature vector from frame-wise spatial feature descriptors that captures the temporal information of facial expressions and speech in audio-video signals. In this paper, an efficient temporal feature aggregation method is presented for human emotion recognition in audio-video signals. The Local Binary Pattern (LBP) features of facial expressions and the Mel Frequency Cepstral Coefficients (MFCCs) with their $\Delta+\Delta\Delta$ of speech are computed from each frame. An experimental analysis is performed to decide which temporal feature aggregation method, i.e., sum normalization or statistical functions, constructs the more effective spatial and temporal feature vector. A multiclass Support Vector Machine (SVM) classification model is trained and tested to evaluate the performance of the temporal feature aggregation method with the LBP features and the MFCCs with their $\Delta+\Delta\Delta$ features. The Bayesian optimization (BO) method determines the optimal hyper-parameters of the multiclass SVM classifier for emotion detection. The experimental analysis of the proposed work is performed on the publicly accessible and challenging Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D) and compared with existing work.
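The abstract contrasts two ways of collapsing frame-wise descriptors (LBP per video frame, MFCC $+\Delta+\Delta\Delta$ per audio frame) into one fixed-length clip-level vector: sum normalization versus statistical functions over time. A minimal NumPy sketch of both strategies is given below; the function name and the particular choice of statistics (mean, standard deviation, min, max) are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def aggregate_frames(frame_features, method="stats"):
    """Collapse a (num_frames, feat_dim) matrix of frame-wise
    descriptors into a single fixed-length clip-level vector.

    method="sum_norm": sum over frames, then L1-normalize
                       (output has feat_dim entries).
    method="stats":    concatenate per-dimension statistics over
                       time (output has 4 * feat_dim entries).
    """
    X = np.asarray(frame_features, dtype=float)
    if method == "sum_norm":
        s = X.sum(axis=0)
        return s / (np.abs(s).sum() + 1e-12)  # guard against all-zero input
    if method == "stats":
        # Statistical-function aggregation: each descriptor dimension
        # is summarized by its temporal mean, std, min, and max.
        return np.concatenate([X.mean(axis=0), X.std(axis=0),
                               X.min(axis=0), X.max(axis=0)])
    raise ValueError(f"unknown method: {method}")
```

Either output can then be fed to a multiclass SVM; the statistical variant preserves more temporal variation at the cost of a feature vector four times as long.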