{"title":"Dealing with Clutter in Augmented Museum Environments","authors":"Wanqi Zhao, D. Stevenson, H. Gardner, Matt Adcock","doi":"10.1145/3359997.3365683","DOIUrl":"https://doi.org/10.1145/3359997.3365683","url":null,"abstract":"Augmented Reality (AR) can be used in museum and exhibition spaces to extend the available information space. However, AR scenes in such settings can become cluttered when exhibits are displayed close to one another. To investigate this problem, we have implemented and evaluated four AR headset interaction techniques for the Microsoft HoloLens that are based on the idea of Focus+Context (F+C) visualisation [Kalkofen et al. 2007]. These four techniques were made up of all combinations of interaction and response dimensions where the interaction was triggered by either “walk” (approaching an exhibit) or “gaze” (scanning/looking at an exhibit) and the AR holograms responded dynamically in either a “scale” or “frame” representation. We measured the efficiency and accuracy of these four techniques in a user study that examined their performance in an abstracted exhibition setting when undertaking two different tasks (“seeking” and “counting”). The results of this study indicated that the “scale” representation was more effective at reducing clutter than the “frame” representation, and that there was a user preference for the “gaze-scale” technique.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133406691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Occlusion Handling for Real-Time AR Applications","authors":"Joaquim Jorge, R. K. D. Anjos, Ricardo Silva","doi":"10.1145/3359997.3365700","DOIUrl":"https://doi.org/10.1145/3359997.3365700","url":null,"abstract":"Augmented reality (AR) allows computer generated graphics to be overlaid in images or video captured by a camera in real time. This technology is often used to enhance perception by providing extra information or simply by enriching the experience of the user. AR offers a significant potential in many applications such as industrial, medical, education and entertainment. However, for AR to achieve the maximum potential and become fully accepted, the real and virtual objects within the user’s environment must become seamlessly integrated. Three main types of problems arise when we try to achieve this effect: illumination issues, tracking difficulties and occlusion troubles. In this work we present an algorithm to handle AR occlusions in real time. Our approach uses raw depth information of the scene to realize a rough foreground / background segmentation. We use this information, as well as details from color data to estimate a blending coefficient and combine the virtual objects with the real objects into a single image. After experimenting with different scenes we show that our approach is able to produce consistent and aesthetically pleasing occlusions between virtual and real objects, with a low computational cost. Furthermore, we explore different alternatives to improving the quality of the final results while overcoming limitations of previous methods.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116549707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Artistic Approach to Virtual Reality","authors":"B. Kelley, C. Tornatzky","doi":"10.1145/3359997.3365701","DOIUrl":"https://doi.org/10.1145/3359997.3365701","url":null,"abstract":"Virtual Reality technologies have been challenging the way in which humans interact with computers since its implementation. When viewed through an artistic lens these interactions reveal a shift in the roles that content creators and the end user fulfill. VR technologies inherently demand that the user participates in the creation of the content while incorporated into the experience. This realization has dramatic implications for media and artistic works, as the traditional role of the content creator has been to dictate and frame the view in which the user interacts with the content, but with VR works much of the creators role has been stripped away and transferred to the viewer. This breaking of the traditional roles, accompanied by the transition away from “the rectangle,” or the flat rectangular plane which acts as a “canvas” for media and artistic works, requires a new approach to VR works.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123095127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multi-User 360-Video Streaming System for Wireless Network","authors":"Haowei Cao, Jialiang Lu, Nan Zong","doi":"10.1145/3359997.3365713","DOIUrl":"https://doi.org/10.1145/3359997.3365713","url":null,"abstract":"With the rapid development of Virtual Reality technology and its hardware, 360-degree video is becoming a new form of media which arouses the interest of public. In the past few years, many 360-degree video delivery schemes are proposed, but there hasn’t been a standard solution which can perfectly overcome the difficulties caused by network latency and bandwidth limit. In this paper, we consider the context of a wireless network consisting of a base station and several users. We proposed a 360-degree video delivery and streaming scheme which serves multiple users simultaneously while optimizing the global bandwidth consumption. The system will predict the head movement of users using machine learning algorithm, and extract the visible portion of the video frame for transmission. The core contribution of the scheme is that it will recognize the conjoint viewport of multiple users, and then optimize the global bandwidth consumption by arranging the transmission of conjoint viewport over the public channel of the wireless network. The results prove that the proposed scheme can effectively reduce global bandwidth consumption of the network with relatively simple configuration.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131528480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LINACVR: VR Simulation for Radiation Therapy Education","authors":"C. Anslow","doi":"10.1145/3359997.3365692","DOIUrl":"https://doi.org/10.1145/3359997.3365692","url":null,"abstract":"","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121553377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Avatars as a tool for audience engagement","authors":"M. Zelenskaya, L. Harvey","doi":"10.1145/3359997.3365717","DOIUrl":"https://doi.org/10.1145/3359997.3365717","url":null,"abstract":"Modern motion capture tools can be used to animate sophisticated digital characters in real time. Through these virtual avatars human performers can communicate with live audience, creating a promising new area of application for public engagement. This study describes a social experiment where a real-time multimedia setup was used to facilitate an interaction between a digital character and visitors at a public venue. The technical implementation featured some innovative elements, such as using iPhone TrueDepth Camera as part of the performance capture pipeline. The study examined public reactions during the experiment in order to explore the empathic potential of virtual avatars and assess their ability to engage live audience.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122516223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"This Land AR: an Australian Music and Sound XR Installation","authors":"P. Matthias, M. Billinghurst, Zi Siang See","doi":"10.1145/3359997.3365740","DOIUrl":"https://doi.org/10.1145/3359997.3365740","url":null,"abstract":"This demonstration presents a development of an Augmented Reality (AR) Indigenous music and sound installation, an extended reality (XR) interactive audible experiential approach for augmenting audible elements in a public exhibition setting. It is a transmedia initiative as part of a music project, This Land. The project connected contributors and musicians, involving traditional to contemporary vocal and instrumental sounds. This Land project embraces cultural and social perspectives and related contemporary discourses within the Australia context. As augmented reality was being explored as an on-going study for the project, a number of conventional printed wall design (posters and photograph exhibits) were enhanced with augmented musical and sound elements. This Land project commenced as artistic performative event built around many years of collaboration between staff and students from the School of Creative Industries and the Wollotuka Institute at University of Newcastle (UON). Its vision embraces issues of Indigenisation, decolonisation, reciprocity and language revitalisation. A portable version of This Land AR will be used for the demonstration where users could experience features of the prototype system in the public setting.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129431459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Containerisation as a method for supporting multiple VR visualisation platforms from a single data source","authors":"T. Wyeld, Haifeng Shen, T. Bednarz","doi":"10.1145/3359997.3365732","DOIUrl":"https://doi.org/10.1145/3359997.3365732","url":null,"abstract":"This paper discusses a proof-of-concept context-aware container server for exposing multiple VR devices to a single data source. The data source was a real-time streamed reconstruction of a combat simulation generated in NetLogo. The devices included a mobile, tablet, PC, data wall, HMD and dataglove interaction. Each device had its specific requirements and user restrictions. Initial testing of this system suggests it is an efficient method for supporting diverse user needs whilst maintaining data integrity and synchronicity. The overall server architecture is discussed as well as future directions for this research.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116027182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FEETICHE: FEET Input for Contactless Hand gEsture Interaction","authors":"D. Lopes, Filipe Relvas, S. Paulo, Y. Rekik, L. Grisoni, J. Jorge","doi":"10.1145/3359997.3365704","DOIUrl":"https://doi.org/10.1145/3359997.3365704","url":null,"abstract":"Foot input has been proposed to support hand gestures in many interactive contexts, however, little attention has been given contactless 3D object manipulation. This is important since many applications, namely sterile surgical theaters require contactless operation. However, relying solely on hand gestures makes it difficult to specify precise interactions since hand movements are difficult to segment into command and interaction modes. The unfortunate results range from unintended activations, to noisy interactions and misrecognized commands. In this paper, we present FEETICHE a novel set of multi-modal interactions combining hand and foot input for supporting contactless 3D manipulation tasks, while standing in front of large displays driven by foot tapping and heel rotation. We use depth sensing cameras to capture both hand and feet gestures, and developed a simple yet robust motion capture method to track dominant foot input. Through two experiments, we assess how well foot gestures support mode switching and how this frees the hands to perform accurate manipulation tasks. Results indicate that users effectively rely on foot gestures to improve mode switching and reveal improved accuracy on both rotation and translation tasks.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130777164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LUI: A multimodal, intelligent interface for large displays","authors":"V. Parthiban, Ashley Jieun Lee","doi":"10.1145/3359997.3365743","DOIUrl":"https://doi.org/10.1145/3359997.3365743","url":null,"abstract":"On large screen displays, using conventional keyboard and mouse input is difficult because small mouse movements often do not scale well with the size of the display and individual elements on screen. We propose LUI, or Large User Interface, which increases the range of dynamic surface area of interactions possible on such a display. Our model leverages real-time continuous feedback of free-handed gestures and voice to control extensible applications such as photos, videos, and 3D models. Utilizing a single stereo-camera and voice assistant, LUI does not require exhaustive calibration or a multitude of sensors to operate, and it can be easily installed and deployed on any large screen surfaces. In a user study, participants found LUI efficient and easily learnable with minimal instruction, and preferred it to more conventional interfaces. This multimodal interface can also be deployed in augmented or virtual reality spaces and autonomous vehicle displays.","PeriodicalId":448139,"journal":{"name":"Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131782756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}