A Wavelet-based Approach for Multimodal Prediction of Alexithymia from Physiological Signals
Valeria Filippou, Nikolas Theodosiou, M. Nicolaou, E. Constantinou, G. Panayiotou, Marios Theodorou
Companion Publication of the 2022 International Conference on Multimodal Interaction, 2022-11-07
DOI: 10.1145/3536220.3558076
Citations: 1
Abstract
Alexithymia is a trait reflecting a person’s difficulty in identifying and expressing their emotions, and it has been linked to various forms of psychopathology. The identification of alexithymia might have therapeutic, preventive and diagnostic benefits. However, little research has been done on predictive models for alexithymia, and literature on multimodal approaches is virtually non-existent. In this light, we present, to the best of our knowledge, the first predictive framework that leverages multimodal physiological signals (heart rate, skin conductance level, facial electromyograms) to detect alexithymia. In particular, we develop a set of features that primarily capture spectral information that is also localized in the time domain via wavelets. Subsequently, simple classifiers are utilized that can learn correlations between features extracted from all modalities. Through several experiments on a novel dataset collected during an emotion processing imagery experiment, we further show that (i) one can detect alexithymia in patients using only one stage of the experiment (elicitation of joy), and (ii) our simpler framework outperforms compared methods, including deep networks, on the task of alexithymia detection. Our proposed method achieves an accuracy of up to 92% when using simple classifiers on specific imagery tasks. The simplicity and efficiency of our approach make it suitable for low-powered embedded devices.
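The abstract gives no implementation details, so the following is only a minimal sketch under stated assumptions rather than the authors' pipeline: relative wavelet sub-band energies stand in for the spectral, time-localized features; synthetic arrays stand in for the heart rate, skin conductance, and facial EMG recordings; and logistic regression stands in for the "simple classifier". The wavelet choice ('db4', 4 decomposition levels), the feature definition, and all variable names are assumptions for illustration, using PyWavelets and scikit-learn.

```python
# Illustrative sketch only (not the paper's exact method): wavelet-based
# spectral features per physiological modality, concatenated and fed to a
# simple classifier. Wavelet, level, and features are assumed for demonstration.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def wavelet_band_energies(signal, wavelet="db4", level=4):
    """Relative energy per wavelet sub-band: spectral content localized in time."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / (energies.sum() + 1e-12)


def extract_features(hr, scl, emg):
    """Concatenate per-modality features (heart rate, skin conductance, facial EMG)."""
    return np.concatenate([wavelet_band_energies(x) for x in (hr, scl, emg)])


# Synthetic stand-in data: 40 participants, 3 modalities, 512 samples each.
rng = np.random.default_rng(0)
X = np.stack([
    extract_features(rng.standard_normal(512),
                     rng.standard_normal(512),
                     rng.standard_normal(512))
    for _ in range(40)
])
y = rng.integers(0, 2, size=40)  # hypothetical alexithymia labels

# A simple linear classifier can learn correlations across the fused features.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```

With real recordings, each 512-sample array would be replaced by a segment from one stage of the imagery experiment (e.g., elicitation of joy), and the random labels by clinical alexithymia assessments; the low computational cost of the sub-band energies and a linear model is what makes this style of pipeline plausible for low-powered embedded devices.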