A structural model of semiotic alignment: The classification of multimodal ensembles as a novel machine learning task
Alexander Mehler, Andy Lucking
AFRICON 2009, published 2009-11-03
DOI: 10.1109/AFRCON.2009.5308098 (https://doi.org/10.1109/AFRCON.2009.5308098)
Citations: 5
Abstract
In addition to the well-known linguistic alignment processes in dyadic communication (e.g., phonetic, syntactic, and semantic alignment), we provide evidence for a genuinely multimodal alignment process, namely semiotic alignment. Communicative elements from different modalities "routinize into" cross-modal "super-signs", which we call multimodal ensembles. Computational models of human communication need expressive models of such ensembles. In this paper, we exemplify semiotic alignment by means of empirical examples of the building of multimodal ensembles. We then propose a graph model of multimodal dialogue that is expressive enough to capture multimodal ensembles. In line with this model, we define a novel machine learning task aimed at training classifiers that detect semiotic alignment in dialogue. The model supports approaches that need insight into realistic human-machine communication.
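The abstract's graph model can be pictured with a minimal sketch. This is a hypothetical illustration, not the authors' formalism: nodes are unimodal communicative elements (a word, a gesture), edges link elements that co-occur cross-modally, and a "multimodal ensemble" is read off as a connected subgraph spanning more than one modality. All names (`Sign`, `DialogueGraph`, the example labels) are invented for this sketch.

```python
# Hypothetical sketch of a graph model of multimodal dialogue.
# Assumption: an ensemble is a connected component whose nodes
# cover at least two modalities (e.g., speech and gesture).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Sign:
    label: str       # e.g., a word or a gesture gloss
    modality: str    # e.g., "speech" or "gesture"

@dataclass
class DialogueGraph:
    edges: dict = field(default_factory=dict)  # Sign -> set[Sign]

    def link(self, a: Sign, b: Sign) -> None:
        """Add an undirected cross-modal (or intra-modal) link."""
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def ensembles(self) -> list:
        """Connected components spanning more than one modality."""
        seen, out = set(), []
        for node in self.edges:
            if node in seen:
                continue
            comp, stack = set(), [node]
            while stack:             # depth-first traversal
                n = stack.pop()
                if n in comp:
                    continue
                comp.add(n)
                stack.extend(self.edges.get(n, ()))
            seen |= comp
            if len({s.modality for s in comp}) > 1:
                out.append(comp)
        return out

# A toy dialogue fragment: the word "window" routinizes with a
# co-occurring framing gesture into one cross-modal super-sign.
g = DialogueGraph()
g.link(Sign("window", "speech"), Sign("framing-gesture", "gesture"))
g.link(Sign("yes", "speech"), Sign("okay", "speech"))  # same modality: not an ensemble
print(len(g.ensembles()))  # -> 1
```

Detecting semiotic alignment, as framed in the abstract, would then amount to training a classifier over such subgraphs; the sketch above only shows the representational side.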