A Multi-Modal Emotion Recognition System Based on CNN-Transformer Deep Learning Technique

Buşra Karatay, Deniz Bestepe, Kashfia Sailunaz, T. Ozyer, R. Alhajj

2022 7th International Conference on Data Science and Machine Learning Applications (CDMA), March 2022
DOI: 10.1109/CDMA54072.2022.00029
Citations: 1
Abstract
Emotion analysis is a subject that researchers from various fields have studied for a long time. Different emotion detection methods have been developed for the text, audio, image, and video domains. Automated emotion detection from videos and images using machine learning and deep learning models has been a topic of growing interest. In this paper, a deep learning framework that combines CNN and Transformer models is proposed to classify emotions using facial and body features extracted from videos. Facial and body features were extracted with OpenPose, and two operations, new video creation and frame selection, were applied in the data preprocessing stage. The experiments were conducted on two datasets, FABO and CK+. Our framework outperformed similar deep learning models, reaching 99% classification accuracy on the FABO dataset, and most versions of the framework achieved over 90% accuracy on both the FABO and CK+ datasets.
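The abstract mentions frame selection as one of the two preprocessing operations but does not specify the exact rule used. A common approach, sketched below as a pure-Python assumption (the function name and uniform-sampling strategy are illustrative, not taken from the paper), is to sample a fixed number of frame indices uniformly across the video:

```python
def select_frames(num_frames: int, num_selected: int) -> list[int]:
    """Uniformly sample `num_selected` frame indices from a video with
    `num_frames` frames. This is a generic frame-selection heuristic;
    the paper's actual selection rule is not specified in the abstract."""
    if num_selected >= num_frames:
        # Short video: keep every frame rather than oversample.
        return list(range(num_frames))
    step = num_frames / num_selected
    # Take one frame at the start of each equal-length segment.
    return [int(i * step) for i in range(num_selected)]

print(select_frames(100, 10))  # [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
```

Sampling uniformly (rather than taking the first N frames) keeps frames from the whole expression, from onset through apex to offset, which matters for body-gesture datasets such as FABO.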
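The combined CNN-Transformer design described above can be sketched as follows. This is a minimal PyTorch illustration of the general pattern (per-frame CNN features fed to a Transformer encoder over the time axis); the paper's actual backbone, layer sizes, and number of emotion classes are not given in the abstract, so every dimension here is an assumption:

```python
import torch
import torch.nn as nn

class CNNTransformerClassifier(nn.Module):
    """Sketch of a CNN + Transformer video-emotion classifier.
    All hyperparameters (d_model, heads, layers, num_classes) are
    illustrative assumptions, not the paper's published values."""

    def __init__(self, num_classes: int = 7, d_model: int = 128):
        super().__init__()
        # Small per-frame CNN, a stand-in for the unspecified backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        encoded = self.transformer(feats)       # self-attention over frames
        return self.head(encoded.mean(dim=1))   # pool over time, classify

model = CNNTransformerClassifier()
logits = model(torch.randn(2, 16, 3, 64, 64))  # 2 clips of 16 frames
print(tuple(logits.shape))  # (2, 7)
```

The division of labor matches the description in the abstract: the CNN handles spatial appearance within each frame, while the Transformer's self-attention models how the facial and body features evolve across the selected frames.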