{"title":"AlterEcho: Loose Avatar-Streamer Coupling for Expressive VTubing","authors":"Man To Tang, Victor Long Zhu, V. Popescu","doi":"10.1109/ismar52148.2021.00027","DOIUrl":null,"url":null,"abstract":"VTubers are live streamers who embody computer animation virtual avatars. VTubing is a rapidly rising form of online entertainment in East Asia, most notably in Japan and China, and it has been more recently introduced in the West. However, animating an expressive VTuber avatar remains a challenge due to budget and usability limitations of current solutions, i.e., high-fidelity motion capture is expensive, while keyboard-based VTubing interfaces impose a cognitive burden on the streamer. This paper proposes a novel approach for VTubing animation based on the key principle of loosening the coupling between the VTuber and their avatar, and it describes a first implementation of the approach in the AlterEcho VTubing animation system. AlterEcho generates expressive VTuber avatar animation automatically, without the streamer’s explicit intervention; it breaks the strict tethering of the avatar from the streamer, allowing the avatar’s nonverbal behavior to deviate from that of the streamer. Without the complete independence of a true alter ego, but also without the constraint of mirroring the streamer with the fidelity of an echo, AlterEcho produces avatar animations that have been rated significantly higher by VTubers and viewers (N = 315) compared to animations created using simple motion capture, or using VMagicMirror, a state-of-the-art keyboard-based VTubing system. Our work also opens the door to personalizing the avatar persona for individual viewers.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ismar52148.2021.00027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10
Abstract
VTubers are live streamers who embody computer-animated virtual avatars. VTubing is a rapidly rising form of online entertainment in East Asia, most notably in Japan and China, and it has more recently been introduced in the West. However, animating an expressive VTuber avatar remains a challenge due to the budget and usability limitations of current solutions: high-fidelity motion capture is expensive, while keyboard-based VTubing interfaces impose a cognitive burden on the streamer. This paper proposes a novel approach to VTubing animation based on the key principle of loosening the coupling between the VTuber and their avatar, and it describes a first implementation of the approach in the AlterEcho VTubing animation system. AlterEcho generates expressive VTuber avatar animation automatically, without the streamer’s explicit intervention; it breaks the strict tethering of the avatar to the streamer, allowing the avatar’s nonverbal behavior to deviate from that of the streamer. Without the complete independence of a true alter ego, but also without the constraint of mirroring the streamer with the fidelity of an echo, AlterEcho produces avatar animations that were rated significantly higher by VTubers and viewers (N = 315) than animations created with simple motion capture or with VMagicMirror, a state-of-the-art keyboard-based VTubing system. Our work also opens the door to personalizing the avatar persona for individual viewers.
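The sketch below illustrates the loose-coupling idea in the abstract as I read it: rather than strictly mirroring the streamer's tracked pose, the avatar blends that pose with autonomously generated nonverbal behavior. This is a minimal, hypothetical illustration; the joint names, the autonomous-gesture model, and the single blending weight are my assumptions, not the method described in the AlterEcho paper.

```python
# Hypothetical sketch of loose avatar-streamer coupling (not AlterEcho's algorithm):
# the avatar pose is a blend of the streamer's tracked pose and autonomous behavior.

import math
import random


def autonomous_gesture(t: float) -> dict:
    """Generate simple autonomous nonverbal behavior (idle sway, occasional nod)."""
    return {
        "head_pitch": 2.0 * math.sin(0.5 * t) + (5.0 if random.random() < 0.01 else 0.0),
        "head_yaw": 3.0 * math.sin(0.3 * t),
        "torso_lean": 1.5 * math.sin(0.2 * t),
    }


def blend_pose(tracked: dict, generated: dict, coupling: float) -> dict:
    """Blend the mirrored (tracked) pose with generated behavior.

    coupling = 1.0 -> strict echo of the streamer (tight coupling)
    coupling = 0.0 -> fully autonomous alter ego (no coupling)
    """
    return {
        joint: coupling * tracked.get(joint, 0.0)
        + (1.0 - coupling) * generated.get(joint, 0.0)
        for joint in set(tracked) | set(generated)
    }


if __name__ == "__main__":
    # Stand-in for a webcam/face-tracker pose stream at 30 fps.
    for frame in range(5):
        t = frame / 30.0
        tracked_pose = {"head_pitch": 0.5, "head_yaw": -1.0, "torso_lean": 0.0}
        avatar_pose = blend_pose(tracked_pose, autonomous_gesture(t), coupling=0.6)
        print(f"t={t:.2f}s avatar pose: {avatar_pose}")
```

An intermediate coupling value (here 0.6) corresponds to the abstract's middle ground between an independent alter ego and a faithful echo.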