{"title":"Merging environments for shared spaces in mixed reality","authors":"Ben J. Congdon, Tuanfeng Y. Wang, A. Steed","doi":"10.1145/3281505.3281544","DOIUrl":"https://doi.org/10.1145/3281505.3281544","url":null,"abstract":"In virtual reality a real walking interface limits the extent of a virtual environment to our local walkable space. As local spaces are specific to each user, sharing a virtual environment with others for collaborative work or games becomes complicated. It is not clear which user's walkable space to prefer, or whether that space will be navigable for both users. This paper presents a technique which allows users to interact in virtual reality while each has a different walkable space. With this method mappings are created between pairs of environments. Remote users are then placed in the local environment as determined by the corresponding mapping. A user study was conducted with 38 participants. Pairs of participants were invited to collaborate on a virtual reality puzzle-solving task while in two different virtual rooms. An avatar representing the remote user was mapped into the local user's space. The results suggest that collaborative systems can be based on local representations that are actually quite different.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125455216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning-based word segmentation for reliable text document retrieval and augmentation","authors":"Jean-Pierre Lomaliza, Hanhoon Park","doi":"10.1145/3281505.3281585","DOIUrl":"https://doi.org/10.1145/3281505.3281585","url":null,"abstract":"Imagine that one may have access to a part of a text document, say a page, and from that would want to identify the document to which it belongs. In such cases, there is a need to perform a content-based document retrieval in a large database.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126892343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preliminary study on angular properties of spatial awareness of human in virtual space","authors":"Y. Ishihara, M. Ishihara","doi":"10.1145/3281505.3281623","DOIUrl":"https://doi.org/10.1145/3281505.3281623","url":null,"abstract":"This manuscript describes an investigation into human's spatial awareness in a virtual space. In the experiment, the subject is asked to see a short video clip of moving through the curved passage, and then fill a questionnaire about how much the passage is curved. As a result, it was found that people would recognize a curved path in virtual space as a smaller degree curved one.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124869967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ramen spoon eraser: CNN-based photo transformation for improving attractiveness of ramen photos","authors":"Daichi Horita, Jaehyeong Cho, Takumi Ege, Keiji Yanai","doi":"10.1145/3281505.3281622","DOIUrl":"https://doi.org/10.1145/3281505.3281622","url":null,"abstract":"In recent years, a large number of food photos are being posted globally on SNS. To obtain many views or \"likes\", attractive photos should be posted. However, some casual foods are served with utensils on a plate or a bowl at restaurants, which spoils attractiveness of meal photos. Especially in Japan where ramen noodle is the most popular casual food, ramen is usually served with a ramen spoon in a ramen bowl in a ramen noodle shop. This is a big problem for SNS photographers, because a ramen spoon soaked in a ramen bowl extremely degrades the appearance of ramen photos. Then, in this paper, we propose anapplication called \"ramen spoon eraser\" that erases a spoon from ramen photos with spoons using a CNN-based Image-to-Image translation network. In this application, it is possible to automatically erase ramen spoons from ramen photos, which extremely improve the attractiveness of ramen photos. 
In the experiment, we train models in two ways as CNN-based Image-to-Image translation networks with the dataset consisting of ramen images with / without spoons collected from the Web.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"179 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124253643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of accompanying onomatopoeia to interaction sound for altering user perception in virtual reality","authors":"Hyeonah Choi, Jiwon Oh, Minwook Chang, G. Kim","doi":"10.1145/3281505.3281614","DOIUrl":"https://doi.org/10.1145/3281505.3281614","url":null,"abstract":"Onomatopoeia refers to a word that phonetically imitates, resembles the sound, or depict an event at hand. In languages like Korean and Japanese, it is used in everyday conversation to emphasize certain situation and enrich the prose. In this poster, we explore if the use of onomatopoeia, visualized and added to the usual sound feedback, could be taken advantage to increase or alter the perceived realism of the sound feedback itself, and furthermore of the situation at hand in virtual reality. A pilot experiment was run to compare the user's subjective perceived realism and experience under four test conditions of presenting a simple physical interaction, accompanying it with: (1) just the \"as-is\" sound (baseline), (2) \"as-is\" sound and onomatopoeia, (3) a representative sound sample (e.g. one for all different collision conditions), and (4) a representative sound sample and onomatopoeia. 
Our pilot study has found that the use of onomatopoeia can alter and add on to the perceived realism/naturalness of the virtual situation such that the experiences of the single representative sound added with the onomatopoeia and \"as-is\" sound were deemed similar.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128379643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prototyping impossible objects with VR","authors":"G. Hori","doi":"10.1145/3281505.3281625","DOIUrl":"https://doi.org/10.1145/3281505.3281625","url":null,"abstract":"Impossible objects are three-dimensional objects that give the impression that it is impossible for such objects to exist in the actual three-dimensional space when observed from a specific view point. The purpose of the present study is to develop a system for prototyping impossible objects with VR, which can be used for prototyping impossible objects as well as evaluating how sure the expected illusions occur when we observe the real objects with naked eyes before molding the impossible objects using 3D printers. We have implemented our prototyping system with Unity and C# programming language for use with Oculus Go. The advantage of employing VR in prototyping impossible objects is that we can take into account the scale effect when we evaluate how sure the expected illusions occur.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129973580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Step aside: an initial exploration of gestural input for lateral movement during walking-in-place locomotion","authors":"Congzhi Wang, Oana A. Dogaru, Patrick L. Strandholt, N. C. Nilsson, R. Nordahl, S. Serafin","doi":"10.1145/3281505.3281536","DOIUrl":"https://doi.org/10.1145/3281505.3281536","url":null,"abstract":"Walking-in-place (WIP) techniques provide users with a relatively natural way of walking in virtual reality. However, previous research has primarily focused on WIP during forward movement and tasks involving turning. Thus, little is known about what gestures to use in combination with WIP in order to enable sidestepping. This paper presents two user studies comparing three different types of gestures based on movement of the hip, leaning of the torso, and actual sidesteps. The first study focuses on purely lateral movement while the second involves both forward and lateral movement. The results of both studies suggest that leaning yielded significantly more natural walking experiences and this gesture also produced significantly less positional drift.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"257 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115009859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HFX studio: haptic editor for full-body immersive experiences","authors":"F. Danieau, P. Guillotel, Olivier Dumas, Thomas Lopez, Bertrand Leroy, N. Mollet","doi":"10.1145/3281505.3281518","DOIUrl":"https://doi.org/10.1145/3281505.3281518","url":null,"abstract":"Current virtual reality systems enable users to explore virtual worlds, fully embodied in avatars. This new type of immersive experience requires specific authoring tools. The traditional ones used in the movie and the video games industries were modified to support immersive visual and audio content. However, few solutions exist to edit haptic content, especially when the whole user's body is involved. To tackle this issue we propose HFX Studio, a haptic editor based on haptic perceptual models. Three models of pressure, vibration and temperature were defined to allow the spatialization of haptic effects on the user's body. These effects can be designed directly on the body (egocentric approach), or specified as objects of the scene (allocentric approach). The perceptual models are also used to describe capabilities of haptic devices. This way the created content is generic, and haptic feedback is rendered on the available devices. The concept has been implemented with the Unity®game engine, a tool already used in VR production. A qualitative pilot user study was conducted to analyze the usability of our tool with expert users. 
Results shows that the edition of haptic feedback is intuitive for these users.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115113552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BoatAR","authors":"Yishuo Liu, Yichuan Zhang, Shiliang Zuo, W. Fu","doi":"10.1145/3281505.3283392","DOIUrl":"https://doi.org/10.1145/3281505.3283392","url":null,"abstract":"Augmented Reality (AR) allows virtual object projection with an unblocked view of the physical world which provides reference and other people. The mixed scene provides an agile platform for communication and collaboration, especially on a product that would be difficult or expensive to present otherwise. In the boating industry, high customization leaves dealers with a high cost on inventory, financially and spatially. In this work, we present BoatAR, a multi-user AR boat configuration system designed for addressing these issues. A prototype system was implemented using HoloLens with shared experience, and demonstrated to a group of boat dealers and received positive feedback. BoatAR provided an example of how a multi-user AR system could help in the conventional industry.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114727985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An evaluation of smartphone-based interaction in AR for constrained object manipulation","authors":"Kristoffer Waldow, Martin Mišiak, Ursula Derichs, Olaf Clausen, Arnulph Fuhrmann","doi":"10.1145/3281505.3281608","DOIUrl":"https://doi.org/10.1145/3281505.3281608","url":null,"abstract":"In Augmented Reality, interaction with the environment can be achieved with a number of different approaches. In current systems, the most common are hand and gesture inputs. However experimental applications also integrated smartphones as intuitive interaction devices and demonstrated great potential for different tasks. One particular task is constrained object manipulation, for which we conducted a user study. In it we compared standard gesture-based approaches with a touch-based interaction via smartphone. We found that a touch-based interface is significantly more efficient, although gestures are being subjectively more accepted. From these results we draw conclusions on how smartphones can be used to realize modern interfaces in AR.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114743118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}