{"title":"Visualization of software components and dependency graphs in virtual reality","authors":"Lisa Nafeie, A. Schreiber","doi":"10.1145/3281505.3281602","DOIUrl":"https://doi.org/10.1145/3281505.3281602","url":null,"abstract":"We present the visualization of component-based software architectures in Virtual Reality (VR) to understand complex software systems. We describe how to get all relevant data for the visualization by data mining on the whole source tree and on source code level. The data is stored in a graph database for further analysis and visualization. The software visualization uses an island metaphor. Storing the data in a graph database allows to easily query for different aspects of the software architecture.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122026991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual gaze: exploring use of gaze as rich interaction method with virtual agent in interactive virtual reality content","authors":"Stevanus Kevin, Yun Suen Pai, K. Kunze","doi":"10.1145/3281505.3281587","DOIUrl":"https://doi.org/10.1145/3281505.3281587","url":null,"abstract":"Nonverbal cues, especially eye gaze, plays an important role in our daily communication, not just as an indicator of interest, but also as a method to convey information to another party. In this work, we propose a simulation of human eye gaze in Virtual Reality content to improve immersion of interaction between user and virtual agent. We developed an eye-tracking integrated interactive narrative content with a focus on player's interaction with gaze aware virtual agent, which is capable of reacting towards the player's gaze to simulate real human-to-human communication in VR environment and conducted an initial study to measure user's reaction.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122376514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Standards-compliant HTTP adaptive streaming of static light fields","authors":"Maarten Wijnants, Hendrik Lievens, Nick Michiels, J. Put, P. Quax, W. Lamotte","doi":"10.1145/3281505.3281539","DOIUrl":"https://doi.org/10.1145/3281505.3281539","url":null,"abstract":"Static light fields are an effective technology to precisely visualize complex inanimate objects or scenes, synthetic and real-world alike, in Augmented, Mixed and Virtual Reality contexts. Such light fields are commonly sampled as a collection of 2D images. This sampling methodology inevitably gives rise to large data volumes, which in turn hampers real-time light field streaming over best effort networks, particularly the Internet. This paper advocates the packaging of the source images of a static light field as a segmented video sequence so that the light field can then be interactively network streamed in a quality-variant fashion using MPEG-DASH, the standardized HTTP Adaptive Streaming scheme adopted by leading video streaming services like YouTube and Netflix. We explain how we appropriate MPEG-DASH for the purpose of adaptive static light field streaming and present experimental results that prove the feasibility of our approach, not only from a networking but also a rendering perspective. In particular, real-time rendering performance is achieved by leveraging video decoding hardware included in contemporary consumer-grade GPUs. Important trade-offs are investigated and reported on that impact performance, both network-wise (e.g., applied sequencing order and segmentation scheme for the source images of the static light field) and rendering-wise (e.g., disk-versus-GPU caching of source images). 
By adopting a standardized transmission scheme and by exclusively relying on commodity graphics hardware, the net result of our work is an interoperable and broadly deployable network streaming solution for static light fields.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124206167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In-pulse: inducing fear and pain in virtual experiences","authors":"Michinari Kono, Takashi Miyaki, J. Rekimoto","doi":"10.1145/3281505.3281506","DOIUrl":"https://doi.org/10.1145/3281505.3281506","url":null,"abstract":"Researchers have attempted to increase the realism of virtual reality (VR) applications in many ways. Combinations of the visual, auditory and haptic feedback have successfully simulated experiences in VR, however, multimedia contents may also stimulate emotions. In this paper, we especially paid attention to negative emotions that may be perceived in such experiences (e.g., fear). We hypothesized that volunteering, visual, mechanical, and electrical feedback may induce negative emotional feedback to users. In-Pulse is a novel system and approach to explore the potential of bringing this emotional feedback to users. We designed a head-mounted display (HMD) combined with mechanical and electrical muscle stimulation (EMS) actuators. A user study was performed to explore the effect of our approaches with combinations with VR contents. The results suggest that mechanical actuators and EMS can improve the experience of virtual experiences.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130495373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image compensation and stabilization for immersive 360-degree videos from capsule endoscopy","authors":"Kazuki Shimozawa, Masakazu Nakazawa, H. Koike, R. Miyanaga, N. Hosoe","doi":"10.1145/3281505.3281599","DOIUrl":"https://doi.org/10.1145/3281505.3281599","url":null,"abstract":"This paper describes image processing that can be used to develop immersive 360-degree videos using capsule endoscopy procedures. When viewed through a head-mounted display (HMD), doctors are able to inspect the human gastrointestinal tract as if they were inside the patient's body. Although the endoscopy capsule has two tiny fisheye cameras, the images captured by these cameras cannot be converted to equirectangular images which is the basic format used to produce 360-degree videos. This study proposes a method to generate a pseudo-omnidirectional video from the original images and stabilizes the video to prevent virtual reality (VR) sickness.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117102452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Co-presence and proxemics in shared walkable virtual environments with mixed colocation","authors":"Iana Podkosova, H. Kaufmann","doi":"10.1145/3281505.3281523","DOIUrl":"https://doi.org/10.1145/3281505.3281523","url":null,"abstract":"The purpose of the experiment presented in this paper is to investigate co-presence and locomotory patterns in a walkable shared virtual environment. In particular, trajectories of users that use a walkable tracking space alone are compared to those of users who use the tracking space in pairs. Co-presence, in a sense of perception of another person being present in the same virtual space is analyzed through subjective responses and behavioral markers. The results indicate that both perception and proxemics in relation to co-located and distributed players differ. The effect on the perception is however mitigated if participants do not collide with the avatars of distributed co-players.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127549684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gaze navigation in the real world by changing visual appearance of objects using projector-camera system","authors":"J. Miyamoto, H. Koike, Toshiyuki Amano","doi":"10.1145/3281505.3281537","DOIUrl":"https://doi.org/10.1145/3281505.3281537","url":null,"abstract":"This paper proposes a method for gaze navigation in the real world by projecting an image onto a real object and changing its appearance. In the proposed method, a camera captures an image of objects in the real world. Next all the pixels in the image but those in a specified region are slightly shifted to left and right. Then the obtained image is projected onto the original objects. As a result, the objects not in the specified region looks blurred. We conducted user experiments and showed that the users' gaze were navigated to the specified region.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122928916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hands-free vibrotactile feedback for object selection tasks in virtual reality","authors":"Tomi Nukarinen, J. Kangas, Jussi Rantala, Toni Pakkanen, R. Raisamo","doi":"10.1145/3281505.3283375","DOIUrl":"https://doi.org/10.1145/3281505.3283375","url":null,"abstract":"Interactions between humans and virtual environments rely on timely and consistent sensory feedback, including haptic feedback. However, many questions remain open concerning the spatial location of haptics on the user's body in VR. We studied how simple vibrotactile collision feedback on two less studied locations, the temples, and the wrist, affects an object picking task in a VR environment. We compared visual feedback to three visual-haptic conditions, providing haptic feedback on the participants' (N=16) wrists, temples or simultaneously on both locations. The results indicate that for continuous, hand-based object selection, the wrist is a more promising feedback location than the temples. Further, even a suboptimal feedback location may be better than no haptic collision feedback at all.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124499739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual reality environment to support activity in the real world: a case of working environment using microscope","authors":"T. Takashina, Yuji Kokumai","doi":"10.1145/3281505.3281565","DOIUrl":"https://doi.org/10.1145/3281505.3281565","url":null,"abstract":"This manuscript introduces a virtual reality (VR) environment to support research activity in the real world. We constructed a prototype to support intellectual activity in the field of life sciences using VR. In the prototype, the users can operate a real microscope from a virtual space, along with other useful equipment such as huge displays, and analyze images carefully and intuitively using a immersive visualizer seamlessly integrated in the environment. We belive that our prototype is promising for expanding the potential of VR applications.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121607468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"Virtual ability simulation\" to boost rehabilitation exercise performance and confidence for people with disability","authors":"Tanvir Irfan Chowdhury, Sharif Mohammad Shahnewaz Ferdous, Tabitha C. Peck, J. Quarles","doi":"10.1145/3281505.3283386","DOIUrl":"https://doi.org/10.1145/3281505.3283386","url":null,"abstract":"The purpose of this paper is to investigate a concept called virtual ability simulation (VAS) for people with disability in a virtual reality (VR) environment. In a VAS people with disabilities perform tasks that are made easier in the virtual environment (VE) compared to the real world. We hypothesized that putting people with disabilities in a VAS will increase confidence and enable more efficient task completion than without a VAS. To investigate this hypothesis, we conducted a within-subjects experiment in which participants performed a virtual task called \"kick the ball\" in two different conditions: a no gain condition (i.e., same difficulty as in the real world) and a rotational gain condition (i.e., physically easier than the real world but visually the same). The results from our study suggest that VAS increased participants' confidence which in turn enables them to perceive the difficulty of the same task easier.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114605166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}