{"title":"The future of computing and food: extended abstract","authors":"Marianna Obrist, P. Marti, Carlos Velasco, Yunwen Tu, Takuji Narumi, N. H. Møller","doi":"10.1145/3206505.3206605","DOIUrl":"https://doi.org/10.1145/3206505.3206605","url":null,"abstract":"The excitement around computing technology in all aspects of life requires that we tackle fundamental issues of healthcare, leisure, labor, education, and food to create the society we want. The aim of this satellite event was to bring together a variety of stakeholders, including local food producers, chefs, designers, engineers, data scientists, and sensory scientists, to discuss the interwoven future of computing technology and food. This event was co-located with the AVI 2018 conference and supported by the ACM Future of Computing Academy (ACM-FCA). The event followed a co-creation approach that encourages conjoined creative and critical thinking, feeding into the formulation of a manifesto on the future of computing and food. We hope this will inspire future discussions on the transformative role of computing technology in food.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122301725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating finger postures by attaching an omnidirectional camera to the center of a user's palm","authors":"Y. Maruyama, Y. Kono","doi":"10.1145/3206505.3206560","DOIUrl":"https://doi.org/10.1145/3206505.3206560","url":null,"abstract":"This research describes the development of a system that estimates, in real time, the natural postures of a user's fingers from images captured by an omnidirectional video camera attached to the center of the user's palm. The finger postures can be estimated by detecting the fingertips in each image and referring to the following preset information: the positional relationship between the camera and the user's fingers/fingertips, the length between the finger joints, and the interdependencies between the finger joints.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129596132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HCI and the educational technology revolution #HCIEd2018: a workshop on video-making for teaching and learning human-computer interaction","authors":"A. Wilde, A. Vasilchenko, A. Dix","doi":"10.1145/3206505.3206600","DOIUrl":"https://doi.org/10.1145/3206505.3206600","url":null,"abstract":"Over the years, the HCI Educators series has studied a number of challenges for the teaching and learning of Human-Computer Interaction at a time of radical educational change. Though video has historically played an important part in the teaching and development of HCI, only recently have video-making and editing technologies become accessible in an unprecedented way, allowing students to become proficient video \"prosumers\" (producers and consumers). Further, there are numerous educational gains to be had through these technologies. Through a highly interactive workshop, we explore how video can be used in practice to leverage skills and foster creativity whilst facilitating knowledge acquisition.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125874773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal interaction for data visualization","authors":"Bongshin Lee, Arjun Srinivasan, J. Stasko, Melanie Tory, V. Setlur","doi":"10.1145/3206505.3206602","DOIUrl":"https://doi.org/10.1145/3206505.3206602","url":null,"abstract":"Multimodal interaction offers many potential benefits for data visualization. It can help people stay in the flow of their visual analysis and presentation, with the strengths of one interaction modality offsetting the weaknesses of others. Furthermore, multimodal interaction offers strong promise for leveraging data visualization on diverse display hardware including mobile, AR/VR, and large displays. However, prior research on visualization and interaction techniques has mostly explored a single input modality such as mouse, touch, pen, or more recently, natural language. The unique challenges and opportunities of synergistic multimodal interaction for data visualization have yet to be investigated. This workshop will bring together researchers with expertise in visualization, interaction design, and natural user interfaces. We aim to build a community of researchers focusing on multimodal interaction for data visualization, explore opportunities and challenges in our research, and establish an agenda for multimodal interaction research specifically for data visualization.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130117418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating augmented reality support for novice users in circuit prototyping","authors":"Andrea Bellucci, Alberto Ruiz, P. Díaz, I. Aedo","doi":"10.1145/3206505.3206508","DOIUrl":"https://doi.org/10.1145/3206505.3206508","url":null,"abstract":"Building an electronic circuit is an error-prone activity for novice users; many errors can occur, such as incorrect wirings or wrong component values. This work explores the use of Augmented Reality (AR) as a technology to mitigate the issues that arise when users construct circuits. We present a study that investigates the effectiveness, usability, and cognitive load of AR visual instructions for circuit prototyping tasks. A mobile-based, window-on-the-world AR tool is compared to traditional media such as paper-based or monitor-displayed electronic drawings. Results show that superimposing components and instructions through AR reduces the number of errors, allows users to easily troubleshoot them and reduces users' mental workload.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123036686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EpisoDAS","authors":"Toshiyuki Masui","doi":"10.1145/3206505.3206593","DOIUrl":"https://doi.org/10.1145/3206505.3206593","url":null,"abstract":"We introduce a simple and powerful visual interaction technique for managing strong passwords. Passwords have been used for authentication for decades, but handling them appropriately is difficult because people easily forget passwords and passwords can be easily attacked. Better authentication methods have been investigated, and various visual interaction methods have been proposed, including the DAS (draw-a-secret) method. Using DAS, users can log into a service just by drawing a secret pattern on the screen, but remembering complex secret patterns is as difficult as remembering passwords. We developed EpisoDAS, with which users can draw a secret pattern and generate strong passwords, using a simple DAS interface, based on secret episodic memories that they cannot easily forget.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121519301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multimodal interface system to support drawing diagrams in talking","authors":"Xingya Xu, Hirohito Shibata","doi":"10.1145/3206505.3206572","DOIUrl":"https://doi.org/10.1145/3206505.3206572","url":null,"abstract":"We propose a multimodal user interface system that uses pen and voice to draw diagrams, especially system configuration figures. We have built a system called TalkingDraw, which supports real-time drawing while talking and does not interfere with natural conversation.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126509524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using technology for health and wellbeing","authors":"M. Czerwinski","doi":"10.1145/3206505.3206609","DOIUrl":"https://doi.org/10.1145/3206505.3206609","url":null,"abstract":"How can we create technologies to help us reflect on and change our behavior, improving our health and overall wellbeing? In this talk, I will briefly describe the last several years of work our research team has been doing in this area. We have developed wearable technology to help families manage tense situations with their children, mobile phone-based applications for handling stress and depression, as well as logging tools that can help you stay focused or recommend good times to take a break at work. The goal in all of this research is to develop tools that adapt to the user so that they can maximize their productivity and improve their health.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123980396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Snap-changes: a dynamic editing strategy for directing viewer's attention in streaming virtual reality videos","authors":"L. Sassatelli, A. Pinna-Dery, M. Winckler, Savino Dambra, Giuseppe Samela, R. Pighetti, R. Aparicio-Pardo","doi":"10.1145/3206505.3206553","DOIUrl":"https://doi.org/10.1145/3206505.3206553","url":null,"abstract":"Cinematic Virtual Reality (VR) has the potential to reach the masses with exciting new experiences, but faces two main hurdles: one is the ability to stream these videos, the other is their design and creation. Indeed, the required data rates are much higher, and in addition to the discomfort and sickness that might arise in a fully immersive experience with a headset, users might get lost when exploring a 360° video and miss the main elements required to understand the underlying plot. We take an innovative approach by jointly addressing the creation and streaming problems. We introduce a technique called snap-changes, aimed at directing viewers to points of interest pre-defined by the content producer. We design a VR editing tool and a custom 360° video player to provide the content creator with the ability to drive the user's attention, and report results from two sets of user experiments indicating that snap-changes indeed help reduce users' head motion.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122878439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating class conversations with classtalk: a study with tangible object prototypes in a primary school","authors":"R. Gennari, A. Melonio, Mehdi Rizvi","doi":"10.1145/3206505.3206513","DOIUrl":"https://doi.org/10.1145/3206505.3206513","url":null,"abstract":"Interactive tangible objects can help orchestrate conversations in school classes. If such tangibles are created with a meta-design approach, for the specific context of their users, they evolve according to their usage. Specifically, tangible object prototypes are created; the prototypes are adopted by their users in ecological studies; users and designers then reflect on their usage to investigate design possibilities, which are rapidly prototyped and again adopted by users. This paper reports on the meta-design and latest evolution of ClassTalk, a tangible for conversations in primary school classes. It shows how new design ideas emerged from users' adoption of ClassTalk prototypes and from moving designers into the users' context.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127303264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}