Augmenting Drama: A Speech Emotion-Controlled Stage Lighting Framework

N. Vryzas, A. Liatsou, Rigas Kotsakis, Charalampos A. Dimoulas, George M. Kalliris

Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences, August 23, 2017
DOI: 10.1145/3123514.3123557
Citations: 7
Abstract
Lighting can play a key role in the aesthetic concept of a theatrical production. This paper explores augmented interaction with stage lighting, offering a synesthetic approach to emotion perception. In the audio-driven framework presented here, the actors' speech is captured by stage microphones and fed to a Speech Emotion Recognition system that classifies each utterance by emotion; the recognized emotions are then matched to different colors. Stage lighting color can thus change in real time according to the actor's recognized speech emotion. The system is described in a generic form, suitable for different implementations of the main idea. For the purposes of this paper, five classes representing different emotions were defined. Several audio features and classifiers were tested on audio data from different emotional-speech datasets to train a speech emotion recognition model. Final evaluation results, including accuracy and a confusion matrix, are presented for the logistic regression classifier. A wheel-of-emotions model for emotion visualization and color selection was adopted to render and simulate the colored lighting results.
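The final stage of the pipeline described above — mapping a recognized emotion class to a lighting color — could be sketched as follows. This is a minimal illustrative sketch only: the five emotion labels and the RGB values are assumptions loosely inspired by a wheel-of-emotions layout, not the paper's actual class set or color mapping.

```python
# Hypothetical emotion-to-color mapping for stage lighting.
# The labels and RGB triples below are illustrative assumptions,
# not the mapping used in the paper.
EMOTION_COLORS = {
    "anger":     (255, 0, 0),      # red
    "happiness": (255, 200, 0),    # warm yellow
    "sadness":   (0, 0, 255),      # blue
    "fear":      (128, 0, 128),    # purple
    "neutral":   (255, 255, 255),  # white
}

def lighting_color(emotion: str) -> tuple:
    """Return the RGB triple for a recognized emotion (white if unknown)."""
    return EMOTION_COLORS.get(emotion, (255, 255, 255))

# Example: a classifier output of "anger" would drive the lights red.
print(lighting_color("anger"))  # (255, 0, 0)
```

In a real deployment, the classifier's per-utterance label would feed this lookup in real time, and the resulting RGB value would be sent to the lighting controller (e.g. over a DMX interface).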