When emotions don’t match: Effects of multimodal emotional misalignment in virtual streamers on viewer engagement
Menghan Duan, Qi Zhang, Yueyue Zhang, Cheng Zhang
DOI: 10.1016/j.im.2025.104222
Information & Management, Volume 62, Issue 8, Article 104222. Published 2025-07-27.
Impact Factor 8.2; JCR Q1 (Computer Science, Information Systems).
Citations: 0
Abstract
Virtual streamers have been increasingly adopted in entertainment live streaming, yet the effectiveness of their social interactions remains underexplored. Because their avatar faces are abstracted and their facial expressions have limited flexibility, virtual streamers rely primarily on vocal tone and textual content to convey emotion. Drawing on the Elaboration Likelihood Model and Cognitive Tuning Theory, this study examines how a streamer’s cross-modal emotional misalignment between voice and text influences viewer engagement through streamer–viewer emotional synchrony. Using moment-to-moment data and machine learning–based emotion recognition techniques, we find that greater cross-modal emotional misalignment increases viewer engagement by heightening viewers’ emotional responses to vocal cues. Additionally, the positivity of the streamer’s vocal tone strengthens the effect of cross-modal emotional misalignment on vocal–emotional synchrony. Finally, we reveal the dual effects of cross-modal emotional misalignment on viewer consumption: while it increases short-term spending on paid comments and virtual gifting, it reduces long-term commitment in the form of premium subscriptions. Our study contributes to research on live streaming and emotional interaction, and provides practical implications for designing emotionally intelligent virtual streamers.
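The abstract describes applying emotion recognition separately to a streamer’s voice and text and then measuring their misalignment, but it does not state the paper’s operationalization. As a minimal hypothetical sketch only (assuming each modality is scored on a valence scale from -1 to 1; the function name and scale are illustrative, not the authors’ method), misalignment could be quantified as the gap between the two per-modality scores:

```python
# Toy illustration, NOT the paper's actual measure:
# cross-modal emotional misalignment as the absolute gap between
# vocal and textual valence scores, each assumed to lie in [-1, 1].

def cross_modal_misalignment(vocal_valence: float, text_valence: float) -> float:
    """Return the absolute difference between vocal and textual valence."""
    for v in (vocal_valence, text_valence):
        if not -1.0 <= v <= 1.0:
            raise ValueError("valence scores must lie in [-1, 1]")
    return abs(vocal_valence - text_valence)

# Example: an upbeat vocal tone (+0.8) paired with emotionally neutral
# on-screen text (0.0) yields a misalignment of 0.8.
score = cross_modal_misalignment(0.8, 0.0)
```

Under this toy reading, a fully aligned moment (both modalities at the same valence) scores 0, while an enthusiastic voice over negative text scores close to 2.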
Journal overview:
Information & Management caters to researchers in the field of information systems, as well as to managers, professionals, administrators, and senior executives involved in designing, implementing, and managing information systems applications.