{"title":"Research on the Emotions Expressed by the Posture of Kemo-mimi","authors":"Ryota Shijo, Sho Sakurai, K. Hirota, T. Nojima","doi":"10.1145/3562939.3565610","DOIUrl":"https://doi.org/10.1145/3562939.3565610","url":null,"abstract":"Kemo-mimi means the dog- or cat-like ears on a humanoid character, or the ears of the animal itself. Kemo-mimi is often used as an element of the avatar’s appearance. It is generally considered that the posture of animal ears represents the animal’s emotional state. And the idea has been used as a technique for expressing emotions in many cartoon and animation works. But despite this fact, there are few examples of studies on the emotions that can be expressed by animal ears. Therefore, we decided to investigate the relationship between the posture of the animal ears and emotions and to establish a method of expressing emotions using the ears. In the experiments, three-dimensional animations of animal ears changing posture were presented to the subjects, and they were asked to answer the emotion corresponding to the posture. The results showed that there was a certain degree of a common understanding of people’s impressions concerning the animal ears. In this paper, we report the emotions that can be expressed by the posture of the animal ears as revealed in this study.","PeriodicalId":134843,"journal":{"name":"Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133700882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating Leg Animation for Walking-in-Place Techniques using a Kinect Sensor","authors":"Jingbo Zhao, Zhetao Wang, Yiqin Peng, Yaojun Wang","doi":"10.1145/3562939.3565679","DOIUrl":"https://doi.org/10.1145/3562939.3565679","url":null,"abstract":"We present a kinematic approach based on animation rigging to generating real-time leg animation. Our main approach is to track vertical in-place foot movements of a user using a Kinect v2 sensor and map tracked foot height to the motions of inverse kinematics (IK) targets. We align two IK targets with an avatar's feet and guide the virtual feet to perform cyclic walking motions using a set of kinematic equations. Preliminary testing shows that this approach can produce compelling real-time forward-backward leg animation during in-place walking.","PeriodicalId":134843,"journal":{"name":"Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115716702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Precueing Sequential Rotation Tasks in Augmented Reality","authors":"Jen-Shuo Liu, B. Tversky, Steven K. Feiner","doi":"10.1145/3562939.3565641","DOIUrl":"https://doi.org/10.1145/3562939.3565641","url":null,"abstract":"Augmented reality has been used to improve sequential-task performance by cueing information about a current task step and precueing information about future steps. Existing work has shown the benefits of precueing movement (translation) information. However, rotation is also a major component in many real-life tasks, such as turning knobs to adjust parameters on a console. We developed an AR testbed to investigate whether and how much precued rotation information can improve user performance. We consider two unimanual tasks: one requires a user to make sequential rotations of a single object, and the other requires the user to move their hand between multiple objects to rotate them in sequence. We conducted a user study to explore these two tasks using circular arrows to communicate rotation. In the single-object task, we examined the impact of number of precues and visualization style on user performance. Results show that precues improved performance and that arrows with highlighted heads and tails, with each destination aligned with the next origin, yielded the shortest completion time on average. In the multiple-object task, we explored whether rotation precues can be helpful in conjunction with movement precues. Here, using a rotation cue without rotation precues in conjunction with a movement cue and movement precues performed the best, implying that rotation precues were not helpful when movement was also required.","PeriodicalId":134843,"journal":{"name":"Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology","volume":"76 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130605589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sweating Avatars Decrease Perceived Exertion and Increase Perceived Endurance while Cycling in Virtual Reality","authors":"Martin Kocur, J. Bogon, Manuel Mayer, Miriam Witte, Amelie Karber, N. Henze, V. Schwind","doi":"10.1145/3562939.3565628","DOIUrl":"https://doi.org/10.1145/3562939.3565628","url":null,"abstract":"Avatars are used to represent users in virtual reality (VR) and create embodied experiences. Previous work showed that avatars’ stereotypical appearance can affect users’ physical performance and perceived exertion while exercising in VR. Although sweating is a natural human response to physical effort, surprisingly little is known about the effects of sweating avatars on users. Therefore, we conducted a study with 24 participants to explore the effects of sweating avatars while cycling in VR. We found that visualizing sweat decreases the perceived exertion and increases perceived endurance. Thus, users feel less exerted while embodying sweating avatars. We conclude that sweating avatars contribute to more effective exergames and fitness applications.","PeriodicalId":134843,"journal":{"name":"Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131246538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Appling Artificial Intelligence Techniques on Singing Teaching of Taiwanese Opera","authors":"Shih-Chieh Lin, Chien-Hsing Chou, Ming-Feng Ke, Shu-Han Liao, Yen-Hung Lin, Chiu-Pin Kuo","doi":"10.1145/3562939.3565650","DOIUrl":"https://doi.org/10.1145/3562939.3565650","url":null,"abstract":"Taiwanese opera is the important culture inheritance in Taiwan, however, this culture inheritance is dying in recently years. Although the Taiwan government and various Taiwanese opera troupes have worked hard for many years to promote this culture to campuses, and held several interest courses; this culture inheritance is still losing and dying. For elder people, Taiwanese opera and Taiwanese cultures are both precious culture treasures and parts of their childhood memories. Nowadays, young people in Taiwan are no longer familiar to Taiwanese, neither to Taiwanese opera singings. It is hard for young people to learn how appreciating this traditional culture. In this study, we refer to the current promotion methods of drama troupes which learn the singing method and posture of Taiwanese opera, we combine artificial intelligence techniques into traditional Taiwanese opera on singing and posture. The proposed system could analyzes students’ voice and postures, and then assists teachers to improve the learning performance of students. Students could compare their singing skill or postures with professional actors and adjust their singing and posture. Students of Taiwanese opera interest class can practice independently without professional teacher's guidance at home. In campus promotion, this game-like promotion method brings young people more acceptance of Taiwanese opera.","PeriodicalId":134843,"journal":{"name":"Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126928146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Colorimetry Evaluation for Video Mapping Rendering","authors":"Eva Décorps, Christian Frisson, Emmanuel Durand","doi":"10.1145/3562939.3565684","DOIUrl":"https://doi.org/10.1145/3562939.3565684","url":null,"abstract":"Perceptually accurate colour reproduction is a core requirement of video mapping applications, where objective evaluation of colour rendering chain taking into account human perception becomes greatly beneficial. In this article, we present a workflow for colorimetry evaluation of video mapping software rendering chain, that we implemented in open-source video mapping software Splash, with a set a common metrics for image quality assessment and tools for colour reproduction evaluation. We introduce an accompanying graphical visualization template to help accurate interpretation of the metrics used. We describe different use case examples that we performed with our tool, proving the workflow efficient for simple, understandable and reproducible colorimetry evaluation.","PeriodicalId":134843,"journal":{"name":"Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology","volume":"45 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134455739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CourseExpo: An Immersive Collaborative Learning Ecosystem","authors":"Connor Wilding Leonie, R. Angotti, Kelvin Sung","doi":"10.1145/3562939.3565646","DOIUrl":"https://doi.org/10.1145/3562939.3565646","url":null,"abstract":"Inspired by the need for remote learning technologies due to the Covid-19 pandemic and the isolated sense of lonely learners, we reimagined a remote classroom that fosters collaboration, builds community and yet without the constraints of the physical world. This paper presents a collaborative learning ecosystem that resembles a traditional city square where avatars of learners and facilitators wander, commingle, discover, and learn together. Buildings in the city square are learning modules which include typical knowledge units, assessment booths, or custom collaborative sketching studios. Our attempted prototype at realizing this conceptualization demonstrated initial success and we offer recommendations for future work.","PeriodicalId":134843,"journal":{"name":"Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117179380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TTTV2 (Transform the Taste and Visual Appearance):Tele-eat virtually with a seasoning home appliance that changes the taste and appearance of food or beverages","authors":"Homei Miyashita","doi":"10.1145/3562939.3565663","DOIUrl":"https://doi.org/10.1145/3562939.3565663","url":null,"abstract":"We prototyped a seasoning appliance that applies a “taste display” technology that employs a taste sensor to reproduce flavors via the spraying and subsequent mixing of colored, flavored liquids to create a printed image on the surface of another food. For example, when toasted bread is used as the medium, the appliance changes its appearance and taste into other food items, such as pizza or chocolate brownie, and the user can then virtually eat that food.","PeriodicalId":134843,"journal":{"name":"Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121453402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring User Behaviour in Asymmetric Collaborative Mixed Reality","authors":"Nels Numan, A. Steed","doi":"10.1145/3562939.3565630","DOIUrl":"https://doi.org/10.1145/3562939.3565630","url":null,"abstract":"A common issue for collaborative mixed reality is the asymmetry of interaction with the shared virtual environment. For example, an augmented reality (AR) user might use one type of head-mounted display (HMD) in a physical environment, while a virtual reality (VR) user might wear a different type of HMD and see a virtual model of that physical environment. To explore the effects of such asymmetric interfaces on collaboration we present a study that investigates the behaviour of dyads performing a word puzzle task where one uses AR and the other VR. We examined the collaborative process through questionnaires and behavioural measures based on positional and audio data. We identified relationships between presence and co-presence, accord and co-presence, leadership and talkativeness, head rotation velocity and leadership, and head rotation velocity and talkativeness. We did not find that AR or VR biased subjective responses, though there were interesting behavioural differences: AR users spoke more words, AR users had a higher median head rotation velocity, and VR users travelled further.","PeriodicalId":134843,"journal":{"name":"Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115336363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Community Game Development Toolkit","authors":"Amelia Roth, Daniel Lichtman","doi":"10.1145/3562939.3565661","DOIUrl":"https://doi.org/10.1145/3562939.3565661","url":null,"abstract":"The Community Game Development Toolkit is a set of tools that provide an accessible, intuitive work-flow within the Unity game engine for students, artists, researchers and community members to create their own visually rich, interactive 3D stories and immersive environments. The toolkit is designed to support diverse communities to represent their own traditions, rituals and heritages through interactive, visual storytelling, drawing on community members’ own visual assets such as photos, sketches and paintings, without requiring the use of coding or other specialized game-design skills. Projects can be built for desktop, mobile and VR applications. This paper describes the background, implementation and planned future developments of the toolkit, as well the contexts in which it has been used.","PeriodicalId":134843,"journal":{"name":"Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121526949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}