{"title":"Size matters: the effects of interactive display size on interaction zone expectations","authors":"C. L. Paul, Lauren Bradel","doi":"10.1145/3206505.3206506","DOIUrl":"https://doi.org/10.1145/3206505.3206506","url":null,"abstract":"The goal of our research was to understand the effects of display size on interaction zones as it applies to interactive systems. Interaction zone models for interactive displays are often static and do not consider the size of the display in their definition. As the interactive display ecosystem becomes more size diverse, current models for interaction are limited in their applicability. This paper describes the results of an exploratory study in which participants interacted with, and discussed their expectations of, interactive displays ranging from personal to wall-sized. Our approach was open-ended rather than grounded in existing interaction zone models in order to explore potential differences in interaction zones and distances. We found that the existence of different interaction zones and the distance at which these zones are relevant depend on display size. In discussing the results, we explore the implications of our findings and offer guidelines for the design of interactive display systems.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115470055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ABBOT: a smart toy motivating children to become outdoor explorers","authors":"Federica Delprino, Chiara Piva, Giovanni Tommasi, M. Gelsomini, Niccolò Izzo, M. Matera","doi":"10.1145/3206505.3206512","DOIUrl":"https://doi.org/10.1145/3206505.3206512","url":null,"abstract":"This article illustrates ABBOT, a pervasive interactive game for children in the early years of primary school that aims to stimulate exploration of outdoor environments. ABBOT combines a smart tangible object for outdoor play with a mobile app for accessing new content related to the discovered natural elements. The tangible object helps children capture images of the elements they find interesting in the physical environment. At home, through simple interactive games on a tablet, children can continue to interact with the collected digital materials and can also access new related content. The article illustrates the design of ABBOT; it also reports on an exploratory study with 160 children from a preschool and a primary school that helped us assess children's attitudes towards the game.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114650414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A tangible-programming technology supporting end-user development of smart-environments","authors":"Giuseppe Desolda, A. Malizia, Tommaso Turchi","doi":"10.1145/3206505.3206562","DOIUrl":"https://doi.org/10.1145/3206505.3206562","url":null,"abstract":"In recent years, smart objects are increasingly pervading the environments we live in. For HCI researchers, an important challenge is how non-technical users can establish the behavior of such devices. This poster presents a new technology implementing a tangible-programming paradigm, which allows non-programmers to synchronize the behavior of ecologies of smart objects, thus determining the creation and customization of smart environments.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115756392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards an IoT model for the assessment of smart devices","authors":"D. Caivano, Fabio Cassano, R. Lanzilotti, A. Piccinno","doi":"10.1145/3206505.3206587","DOIUrl":"https://doi.org/10.1145/3206505.3206587","url":null,"abstract":"The current Internet of Things (IoT) market offers a wide variety of devices with complex designs and different functionalities. In addition, the same IoT device can be used in different domains, from home to industry to healthcare. The management of such devices occurs in different ways, for example through visual interaction using high-level programming languages (e.g. Event-Condition-Action rules) or through high-level APIs. Generally, end users are not technical experts and are not able to configure their IoT devices, thus they need external tools (or visual interaction paradigms) to exploit and better control them. In this work, we present a model for IoT devices that makes it possible to assess those devices and their suitability for a certain domain according to four dimensions: communication, target, data manipulation and development. The model aims at a better understanding of device capabilities and, consequently, at facilitating the choice of the devices that best suit the domain in which they should be used.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116512214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a construction kit for visual search interfaces","authors":"Mandy Keck, Rainer Groh","doi":"10.1145/3206505.3206567","DOIUrl":"https://doi.org/10.1145/3206505.3206567","url":null,"abstract":"In recent years, many novel approaches have been proposed for exploring complex data sets. However, little guidance is available for designers to create similar solutions and to reuse established patterns. This paper builds upon our previous work on the development of a construction kit to support designers in creating new visual search interfaces. It provides a set of elements and patterns that can be easily combined with each other. In this paper, we present different application scenarios for using the construction kit within the design process.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"244 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122532243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial statistics for analyzing data in cinematic virtual reality","authors":"Sylvia Rothe, H. Hussmann","doi":"10.1145/3206505.3206561","DOIUrl":"https://doi.org/10.1145/3206505.3206561","url":null,"abstract":"Cinematic Virtual Reality has been increasing in popularity in recent years. Watching 360° movies with head-mounted displays, viewers can freely choose the direction of view, and thus the visible section of the movie. In order to explore viewers' behavior, methods are needed for collecting and analyzing data. In our experiments we compared the viewing behavior for movies with spatial and non-spatial sound and tracked the head movements of the participants. This work-in-progress describes two approaches from spatial statistics - analysis of Space Time Cubes and the Getis-Ord Gi* statistic - for analyzing head-tracking data.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122102198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Keyboard with tactile feedback on smartphone touch screen","authors":"Emanuele Panizzi","doi":"10.1145/3206505.3206563","DOIUrl":"https://doi.org/10.1145/3206505.3206563","url":null,"abstract":"Pressing buttons on a smartphone touch screen is difficult if you are not looking at the screen. We developed a numerical keyboard that provides tactile feedback using short phone vibrations. The feedback is provided both when the user swipes over the keyboard and when they press a key. We describe how we implemented it on the iPhone 7, using the 3D Touch capability and the UIFeedbackGenerator class.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123190739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cultures of participation in the digital age: design trade-offs for an inclusive society","authors":"B. R. Barricelli, G. Fischer, D. Fogli, A. Mørch, A. Piccinno, S. Valtolina","doi":"10.1145/3206505.3206599","DOIUrl":"https://doi.org/10.1145/3206505.3206599","url":null,"abstract":"This new edition of the CoPDA workshop, the fifth since 2013, is dedicated to the discussion of design trade-offs that have to be addressed for embracing diversity and implementing an inclusive society. With this workshop, we invite researchers and practitioners to discuss and exchange experiences that can inform design processes in a Cultures of Participation perspective, focusing on theoretical frameworks, practical experiences, case studies, and research projects, in both academia and industry.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130184303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interaction modularity in multi-device systems: a conceptual approach","authors":"E. Dubois, A. Celentano","doi":"10.1145/3206505.3206559","DOIUrl":"https://doi.org/10.1145/3206505.3206559","url":null,"abstract":"In this paper we propose a conceptual view on the modularization of multi-device interactive systems. Our view is supported by the re-visitation and adaptation of some software engineering concepts. We provide the definition of interaction module and its interface and discuss how these concepts contribute to the design of such systems.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131040255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PhoToDo","authors":"Kouhei Matsuda, Satoshi Nakamura","doi":"10.1145/3206505.3206574","DOIUrl":"https://doi.org/10.1145/3206505.3206574","url":null,"abstract":"Many people manage their tasks using tools such as notebooks or personal task management applications on their smartphones. In fact, according to Microsoft's research, 78% of respondents in the United States currently have at least one task management app [1]. However, conventional task lists are sometimes troublesome because tasks usually need to be expressed in words, and it takes time to understand tasks when they are described this way. In contrast, it is known that a person can instantaneously process an image and has the ability to process many images at once [2][3]. Therefore, we propose a system called \"PhoToDo\" that enables people to use visual images to manage tasks. With PhoToDo, users can instantly visualize all their tasks and efficiently manage them. In this paper, we describe the design and implementation of our system and show its effectiveness through experimental tests.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115115772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}