{"title":"Mouth gesture based emotion awareness and interaction in virtual reality","authors":"Xing Zhang, U. Ciftci, L. Yin","doi":"10.1145/2787626.2787635","DOIUrl":null,"url":null,"abstract":"In recent years, Virtual Reality (VR) has become a new media to provide users an immersive experience. Events happening in the VR connect closer to our emotions as compared to other interfaces. The emotion variations are reflected as our facial expressions. However, the current VR systems concentrate on \"giving\" information to the user, yet ignore \"receiving\" emotional status from the user, while this information definitely contributes to the media content rating and the user experience. On the other hand, traditional controllers become difficult to use due to the obscured view point. Hand and head gesture based control is an option [Cruz-Neira et al. 1993]. However, certain sensor devices need to be worn to assure control accuracy and users are easy to feel tired. Although face tracking achieves accurate result in both 2D and 3D scenarios, the current state-of-the-art systems cannot work when half of the face is occluded by the VR headset because the shape model is trained by data from the whole face.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM SIGGRAPH 2015 Posters","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2787626.2787635","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In recent years, Virtual Reality (VR) has become a new medium for providing users with an immersive experience. Events happening in VR connect more closely to our emotions than those in other interfaces, and these emotional variations are reflected in our facial expressions. However, current VR systems concentrate on "giving" information to the user while ignoring "receiving" the user's emotional state, even though this information clearly contributes to media content rating and the user experience. Moreover, traditional controllers become difficult to use because the headset obscures the user's view of them. Hand and head gesture based control is an option [Cruz-Neira et al. 1993], but it requires wearing sensor devices to ensure control accuracy, and users tire easily. Although face tracking achieves accurate results in both 2D and 3D scenarios, current state-of-the-art systems fail when half of the face is occluded by the VR headset, because their shape models are trained on data from the whole face.
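To make the idea of lower-face-only interaction concrete, the following is a minimal sketch, not the authors' system: it assumes an external tracker supplies iBUG 68-point facial landmarks for the unoccluded lower half of the face, and maps a simple mouth-opening / mouth-stretching measure to a coarse gesture label. The landmark indices follow the standard 68-point convention (inner lips 60-67, outer mouth corners 48 and 54, chin tip 8); all thresholds are illustrative assumptions.

```python
# Sketch: coarse mouth-gesture classification from lower-face landmarks only.
# Assumes an external tracker provides (68, 2) landmark coordinates; the upper
# face may be occluded by the VR headset, since only mouth/chin points are used.
import numpy as np


def mouth_aspect_ratio(landmarks: np.ndarray) -> float:
    """Ratio of inner-lip opening to inner mouth width (68-point convention)."""
    inner_top = landmarks[62]      # inner upper-lip midpoint
    inner_bottom = landmarks[66]   # inner lower-lip midpoint
    left_corner = landmarks[60]    # inner left mouth corner
    right_corner = landmarks[64]   # inner right mouth corner
    opening = np.linalg.norm(inner_top - inner_bottom)
    width = np.linalg.norm(left_corner - right_corner)
    return opening / max(width, 1e-6)


def classify_mouth_gesture(landmarks: np.ndarray) -> str:
    """Map mouth shape to a coarse gesture label (thresholds are assumptions)."""
    mar = mouth_aspect_ratio(landmarks)
    mouth_width = np.linalg.norm(landmarks[54] - landmarks[48])   # outer corners
    lower_face = np.linalg.norm(landmarks[48] - landmarks[8])     # corner to chin
    if mar > 0.5:
        return "open"        # e.g. surprise, or an interaction command like "select"
    if mouth_width / max(lower_face, 1e-6) > 1.1:
        return "stretched"   # e.g. smile, or a "confirm" command
    return "neutral"


if __name__ == "__main__":
    # Dummy landmarks for a quick smoke test; a real tracker would supply these.
    pts = np.zeros((68, 2))
    pts[60], pts[64] = (0, 0), (40, 0)       # inner mouth corners
    pts[62], pts[66] = (20, -12), (20, 12)   # open inner lips
    pts[48], pts[54] = (-5, 0), (45, 0)      # outer mouth corners
    pts[8] = (20, 60)                        # chin tip
    print(classify_mouth_gesture(pts))       # prints "open"
```

Such per-frame labels could feed either an emotion-awareness channel (rating reactions to VR content) or a hands-free interaction channel; the key point illustrated here is that the features are computed entirely from the visible lower face.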