{"title":"CleanLeaf Table: Preventing the Spread of COVID-19 through Smart Surfaces","authors":"Ashley Colley, Willehardt Gröhn, Jonna Häkkilä","doi":"10.1145/3490632.3497872","DOIUrl":"https://doi.org/10.1145/3490632.3497872","url":null,"abstract":"One mechanism for the spread of the COVID-19 virus is through contaminated surfaces, e.g. tables in cafes, trains or public libraries. This mechanism may be prevented by cleaning the table surface between each use. In practice, this can be optimized by directing arriving users to clean tables and highlighting to staff which surfaces require cleaning. One approach to achieve this is through making the table surface itself smart and indicate its clean or dirty status. In the CleanLeaf table we demonstrate two approaches to integrating indication in a table surface, LEDs under a thin wood veneer and electrochromic displays. Rather than explicit light emitting display based signage, the design demonstrates calm computing with the display forming an integral part of the surface design.","PeriodicalId":158762,"journal":{"name":"Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116713562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"XR4ISL: Enabling Psychology Experiments in Extended Reality for Studying the Phenomenon of Implicit Social Learning","authors":"Cristian Pamparău, Radu-Daniel Vatavu, Andrei R. Costea, Răzvan Jurchis, A. Opre","doi":"10.1145/3490632.3497830","DOIUrl":"https://doi.org/10.1145/3490632.3497830","url":null,"abstract":"We present XR4ISL, an XR system designed to support psychology experiments examining Implicit Social Learning, a fundamental phenomenon that guides human behavior, cognition, and emotion. We discuss XR4ISL with reference to MR4ISL, a previous system designed for Mixed Reality only, and reflect on differences between Mixed and Virtual Reality for psychology experiments.","PeriodicalId":158762,"journal":{"name":"Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122993559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affective Umbrella – Towards a Novel Sensor Integrated Multimedia Platform Using Electrodermal and Heart Activity in an Umbrella Handle","authors":"Kanyu Chen, Jiawen Han, G. Chernyshov, Christopher Changmok Kim, Ismael Rasa, K. Kunze","doi":"10.1145/3490632.3497835","DOIUrl":"https://doi.org/10.1145/3490632.3497835","url":null,"abstract":"We present our first steps towards an umbrella-based novel multimedia platform using physiological data as an integrated feedback loop. In this paper, we demonstrate the viability of using an umbrella handle as a form factor to measure electrodermal activity(EDA) and heart rate(HR) in real-time. We compared the performance of the device with that of a more conventional finger sensor placement. Although the finger sensor placement is more widespread and considered to be more reliable, yet we are able to derive meaningful data from the umbrella handle in both stationary and dynamic contexts in the presented feasibility study.","PeriodicalId":158762,"journal":{"name":"Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114273436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Studying Natural User Interfaces for Smart Video Annotation towards Ubiquitous Environments","authors":"R. Rodrigues, R. Madeira, Nuno Correia","doi":"10.1145/3490632.3490672","DOIUrl":"https://doi.org/10.1145/3490632.3490672","url":null,"abstract":"Creativity and inspiration for problem-solving are critical skills in a group-based learning environment. Communication procedures have seen continuous adjustments over the years, with increased multimedia elements usage like videos to provide superior audience impact. Annotations are a valuable approach for remembering, reflecting, reasoning, and sharing thoughts on the learning process. However, it is hard to control playback flow and add potential notes during video presentations, such as in a classroom context. Teachers often need to move around the classroom to interact with the students, which leads to situations where they are physically far from the computer. Therefore, we developed a multimodal web video annotation tool that combines a voice interaction module with manual annotation capabilities for more intelligent natural interactions towards ubiquitous environments. We observed current video annotation practices and created a new set of principles to guide our research work. Natural language enables users to express their intended actions while interacting with the web video player for annotation purposes. We have developed a customized set of natural language expressions that map the user speech to specific software operations through studying and integrating new artificial intelligence techniques. Finally, the paper presents positive results gathered from a user study conducted to evaluate our solution.","PeriodicalId":158762,"journal":{"name":"Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114575302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Collective Physiology Sharing as Social Cues to Support Engagement in Online Learning","authors":"Jiawen Han, Chi-Lan Yang, G. Chernyshov, Zhuoqi Fu, Reiya Horii, Takuji Narumi, K. Kunze","doi":"10.1145/3490632.3497827","DOIUrl":"https://doi.org/10.1145/3490632.3497827","url":null,"abstract":"Insufficient social cues between distributed learners in online learning could result in lack of engagement and social bonds. With the development of wearable sensing, sharing physiological data can be used to enhance mutual understanding and connectedness among sharers. Our work aims to explore the potential of sharing heart rate (HR) and heart rate variability (HRV) collected from distributed learners to enhance their online learning experiences. We implemented a physiological streaming system and conducted a field study with 11 learners in online classes. This paper describes the study and discusses our interview findings by contrasting the influence of visualized collective physiological data from viewpoints of data contributors and viewers. Our exploratory results suggest streaming collective HR and HRV from multiple distributed learners could be used in online classes to improve engagement and sense of community.","PeriodicalId":158762,"journal":{"name":"Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia","volume":"21 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128158104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Design of a Smartphone VR Viewer Inspired by 19th Century Stereoscopes","authors":"Daniel Taipina, Jorge C. S. Cardoso","doi":"10.1145/3490632.3497870","DOIUrl":"https://doi.org/10.1145/3490632.3497870","url":null,"abstract":"Stereoscopic photography was one of the main forms of visual communication in the second half of the 19th century, leaving even today an important impact on our visual culture. In this work, we have re-imagined the classical stereoscope in order to take advantage of smartphone-VR technological capabilities, while still maintaining a viewing experience close to the original. This pictorial describes the design process, functionality, and evaluation of the Spectare device for experiencing stereoscopic photographs. Spectare has been used for experiencing Cultural Heritage content related to the virtual reconstruction of the Monastery of Santa Cruz, Coimbra, Portugal, as it was in 1834.","PeriodicalId":158762,"journal":{"name":"Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115886456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conversational User Interfaces to support Astronauts in Extraterrestrial Habitats","authors":"Ana Rita Castro Freitas, Alexander Schülke, Simon Glaser, Pitt Michelmann, Thanh Nguyen Chi, Lisa Marie Schröder, Z. Fadavi, Gaurav Talekar, Jette Ternieten, Akash Trivedi, Jana Wahls, Warda Masood, C. Heinicke, Johannes Schöning","doi":"10.1145/3490632.3490673","DOIUrl":"https://doi.org/10.1145/3490632.3490673","url":null,"abstract":"Long-term space missions are challenging and demanding for astronauts. Confined spaces and long-duration sensory deprivation may cause psychological problems for the astronauts. In this paper, we envision how extraterrestrial habitats (e.g., a habitat on the Moon or Mars) can maintain the well-being of the crews by augmenting the astronauts. In particular, we report on the design, implementation, and evaluation of conversational user interfaces (CUIs) for extraterrestrial habitats. The goal of such CUIs is to support scientists during their daily and scientific routines on their missions within the extraterrestrial habitat and provide emotional support. During a week-long so-called analog mission with four scientists using a Wizard of Oz prototype, we derived design guidelines for such CUIs. Successively, based on the derived guidelines, we present the implementation and evaluation of two CUIs named CASSIOPEIA and PEGASUS.","PeriodicalId":158762,"journal":{"name":"Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132122135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AR in TV: Design and Evaluation of Mid-Air Gestures for Moderators to Control Augmented Reality Applications in TV","authors":"Niloofar Samimi, Simon von der Au, Florian Weidner, W. Broll","doi":"10.1145/3490632.3490668","DOIUrl":"https://doi.org/10.1145/3490632.3490668","url":null,"abstract":"Recent developments in augmented reality for TV productions encouraged broadcasters to enhance interaction with virtual content for moderators. However, traditional interaction methods are considered distracting and not intuitive. To overcome these issues, we performed a gesture elicitation study with a follow-up evaluation. For this, we considered TV moderators as primary users of the gestures as well as viewers as recipients. The elicited gesture set consists of five gestures for two types of camera shots (long shot and close shot). Findings of the evaluation study indicate that the derived set of gestures requires low physical and concentration effort from moderators. Also, both moderators and viewers found them appropriate to be used in TV with respect to understandability, distraction, likeability, and appropriateness. Using these gestures would allow moderators to control AR content in TV and tell stories in a modern and more expressive way.","PeriodicalId":158762,"journal":{"name":"Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131728370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ThermoQuest - A Wearable Head Mounted Display to Augment Realities with Thermal Feedback","authors":"Kirill Ragozin, Xiaru Meng, R. Peiris, Katrin Wolf, G. Chernyshov, K. Kunze","doi":"10.1145/3490632.3490649","DOIUrl":"https://doi.org/10.1145/3490632.3490649","url":null,"abstract":"We present ThermoQuest, a self-contained wearable head-mounted display system for enhancing Virtual Reality experiences with temperature feedback. It’s constructed with commodity hardware elements, featuring 6 Peltier elements on the rim of the headset touching the users face. We explain the design and implementation of an affordable wearable thermal VR prototype build with commodity hardware. In a user study with 15 participants, we show evidence for a significant difference in reported presence in VR between thermal VR and control conditions (p > 0.017) in a counterbalanced experimental setup. We end with a discussion and use cases of the presented VR prototype and similar systems.","PeriodicalId":158762,"journal":{"name":"Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia","volume":"42 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116392020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User-Elicited Gestural Interaction With Future In-Car Holographic 3D Displays","authors":"M. Kazhura","doi":"10.1145/3490632.3497832","DOIUrl":"https://doi.org/10.1145/3490632.3497832","url":null,"abstract":"Holographic 3D displays (H3D) have the potential to expand the interaction space for in-car infotainment systems by providing a larger depth range than other state of the art 3D display technologies. This work explored how non-expert users would interact with non-driving related tasks tailored to H3D visualization. In a gesture-elicitation study, N = 20 participants proposed mid-air gestures for a set of 33 tasks (referents) displayed either within or outside of participants’ reach. In a follow-up reverse-matching study with N = 21 participants, the resulting set of most mentioned gestures was refined. The final gesture set shows that techniques elicited for other 3D technologies are applicable to interaction with future in-car H3D displays.","PeriodicalId":158762,"journal":{"name":"Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123650359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}