Title: Designing usable interfaces for the Industry 4.0
Authors: M. D. Gregorio, G. Nota, Marco Romano, M. Sebillo, G. Vitiello
DOI: https://doi.org/10.1145/3399715.3399861
Venue: Proceedings of the International Conference on Advanced Visual Interfaces (AVI 2020), published 2020-09-28
Abstract: In Industry 4.0, Human Machine Interfaces are widely used to increase the performance of production processes while reducing the number of emergencies and accidents. In manufacturing, the most typical system used to monitor production is the Andon: a graphical system deployed in plants to notify the operators responsible for management, maintenance, and production performance that a problem has occurred. The usability of such interfaces is essential for an operator to identify and react effectively to potentially critical situations, yet improving it is a major challenge given the increasing complexity of the data that operators must process and understand quickly. In this paper, we present a set of guidelines to help professional developers design usable interfaces for monitoring industrial production in manufacturing. The guidelines are based on usability principles and were formalized by reviewing existing industrial interfaces. Using a realistic case study prepared with manufacturing experts, we propose an Andon interface that we developed to test the efficacy of these guidelines on a latest-generation touch-wall device.

{"title":"Modelling Data Visualization Interactions: from Semiotics to Pragmatics and Back to Humans","authors":"P. Buono, A. Locoro","doi":"10.1145/3399715.3399903","DOIUrl":"https://doi.org/10.1145/3399715.3399903","url":null,"abstract":"This paper makes a point of current perspectives on Data Visualization research that were essentially conceived to provide guidelines for finding the best mapping between data and visual representations. Going back to foundational concepts of HCI that rely on manipulation of visual symbols, we propose a new perspective, with the aim to focus on a different configuration, that considers visual signs, professional contexts and user practices. We argue that, so far, user practices have been neglected or left behind in design, evaluation and recommendation scenarios, reducing them to the pure relational focus among kind of data, kind of charts and in lab tasks. This may underestimate the potential of the pragmatic side of this relation, where humans manipulate and interpret signs on the basis of their \"practical knowledge, a factor that should be considered to improve human interactions with Data Visualization tools. The perspective discussed here would bring into light and help frame open problems such as interactions in routine tasks and the interpretation of data through visual interactive tools in daily professional practices. By proposing a light but formal model of investigation of these pragmatic interactions, we would like to contribute to the current debate around data visualization as the new strategic tool for dealing with the growing complexity of big data streams, digitization of life, sensor and hardware-embedded intelligence.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123396546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Evaluating the Scalability of Non-Preferred Hand Mode Switching in Augmented Reality
Authors: Jesse Smith, Isaac Wang, Winston Wei, Julia Woodward, J. Ruiz
DOI: https://doi.org/10.1145/3399715.3399850
Venue: Proceedings of the International Conference on Advanced Visual Interfaces (AVI 2020), published 2020-09-28
Abstract: Mode switching allows applications to support a wide range of operations (e.g. selection, manipulation, and navigation) using a limited input space. While the performance of different mode switching techniques has been extensively examined for pen- and touch-based interfaces, investigating mode switching in augmented reality (AR) is still relatively new. Prior work found that using the non-preferred hand is an efficient mode switching technique in AR. However, it is unclear how the technique performs as the number of modes increases, which is more indicative of real-world applications. Therefore, we examined the scalability of non-preferred hand mode switching in AR with two, four, six, and eight modes. We found that as the number of modes increases, performance plateaus after the four-mode condition. We also found that counting gestures have varying effects on mode switching performance in AR. Our findings suggest that modeling mode switching performance in AR is more complex than simply counting the number of available modes. Our work lays a foundation for understanding the costs associated with scaling interaction techniques in AR.

Title: CrossWidgets
Authors: M. Angelini, G. Blasilli, S. Lenti, A. Palleschi, G. Santucci
DOI: https://doi.org/10.1145/3399715.3399918
Venue: Proceedings of the International Conference on Advanced Visual Interfaces (AVI 2020), published 2020-09-28
Abstract: Filtering is one of the basic interaction techniques in Information Visualization, with the main objective of limiting the amount of displayed information through constraints on attribute values. Research has focused on direct-manipulation selection or on simple interactors such as sliders and check-boxes: while interaction with a single attribute is, in principle, straightforward, understanding the relationship between multiple attribute constraints and the resulting selection can be a complex task. To cope with this problem, usually referred to as cross-filtering, the paper provides a general definition of the structure of a filter, based on domain values and data distribution; identifies visual feedback on the relationship between filter status and the current selection; and introduces guidance mechanisms to help users accomplish the intended selection. Then, leveraging these design elements, the paper proposes CrossWidgets, modular attribute selectors that provide the user with feedback and guidance during complex interaction with multiple attributes. An initial controlled experiment demonstrates the benefits that CrossWidgets bring to cross-filtering activities.

Title: Interactive Time-Series of Measures for Exploring Dynamic Networks
Authors: Liwenhan Xie, J. O'Donnell, Benjamin Bach, Jean-Daniel Fekete
DOI: https://doi.org/10.1145/3399715.3399922
Venue: Proceedings of the International Conference on Advanced Visual Interfaces (AVI 2020), published 2020-09-28
Abstract: We present MeasureFlow, an interface to visually and interactively explore dynamic networks through time-series of network measures such as link number, graph density, or node activation. When networks contain many time steps, grow large and dense, or change at high frequency, traditional visualizations that focus on network topology, such as animations or small multiples, fail to provide adequate overviews and thus fail to guide the analyst towards interesting time points and periods. MeasureFlow presents a complementary approach that relies on visualizing time-series of common network measures to provide a detailed yet comprehensive overview of when changes happen and which network measures they involve. As dynamic networks undergo changes of varying rates and characteristics, network measures provide important hints on the pace and nature of their evolution and can guide analysts in their exploration; based on a set of interactive and signal-processing methods, MeasureFlow allows an analyst to select and navigate periods of interest in the network. We demonstrate MeasureFlow through case studies with real-world data.

Title: Virtual bowling: launch as you all were there!
Authors: M. De Marsico, Emanuele Panizzi, Francesca Romana Mattei, A. Musolino, Manuel Prandini, Marzia Riso, D. Sforza
DOI: https://doi.org/10.1145/3399715.3399848
Venue: Proceedings of the International Conference on Advanced Visual Interfaces (AVI 2020), published 2020-09-28
Abstract: This work proposes BowlingVR, an advanced Virtual Reality (VR) multiplayer game with two main goals: the first is to provide a realistic User eXperience (UX) by reproducing the dynamics and physical context of a real bowling challenge; the second is to enable remote, distributed, socially satisfying gameplay that gives the user the illusion of the real presence of the remote players. The prototype was evaluated with a modified version of SUXES, a user-interview schema originally devised for multimedia applications, adapted here to better compare the responses of different users and obtain a more reliable estimate of user appreciation.

Title: Visualizing Program Genres' Temporal-Based Similarity in Linear TV Recommendations
Authors: Veronika Bogina, Julia Sheidin, T. Kuflik, S. Berkovsky
DOI: https://doi.org/10.1145/3399715.3399813
Venue: Proceedings of the International Conference on Advanced Visual Interfaces (AVI 2020), published 2020-09-28
Abstract: There is increasing evidence that data visualization is an important and useful tool for quickly understanding and filtering large amounts of data. In this paper, we contribute to this body of work with a study that compares chord diagrams and ranked lists for presenting temporal TV program genre similarity in next-program recommendations. We consider genre similarity based on the similarity of temporal viewing patterns. We find that the chord presentation allows users to see the whole picture and improves their ability to choose items beyond the ranked list of top similar items. We believe that similarity visualization may be useful for providing both recommendations and their explanations to end users.

{"title":"Interactive Human Centered Artificial Intelligence: A Definition and Research Challenges","authors":"A. Schmidt","doi":"10.1145/3399715.3400873","DOIUrl":"https://doi.org/10.1145/3399715.3400873","url":null,"abstract":"Artificial Intelligence (AI) has become the buzzword of the last decade. Advances so far have been largely technical with a focus on machine learning (ML). Only recently have we begun seeing a shift towards focusing on the human aspects of artificial intelligence, centered on the narrow view of making AI interactive and explainable. In this paper I suggest a definition for \"Interactive Human Centered Artificial Intelligence and outline the required properties. Staying in control is essential for humans to feel safe and have self-determination. Hence, we need to find ways for humans to understand AI based systems and means to allow human control and oversight. In our work, we argue that levels of abstractions and granularity of control are a general solution to this. Furthermore, it is essential that we make explicit why we want AI and what are the goals of AI research and development. We need to state the properties that we expect of future intelligent systems and who will benefit from a system or service. For me, AI and ML are very much comparable to raw materials (like stone, iron, or bronze). Historical periods are named after these materials as they fundamentally changed what humans can build and what tools humans can engineer. Hence, I argue that in the AI age we need to shift the focus from the material (e.g. the AI algorithms, as there will be plenty of material) towards the tools and infrastructures that are enabled which are beneficial to humans. It is apparent that AI will allow the automation of mental routine tasks and that it will extend our ability to perceive the world and foresee events. For me, the central question is how to create these tools for amplifying the human mind without compromising human values.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123658589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: FeedBucket
Authors: Valentino Artizzu, Davide Fara, Riccardo Macis, L. D. Spano
DOI: https://doi.org/10.1145/3399715.3399947
Venue: Proceedings of the International Conference on Advanced Visual Interfaces (AVI 2020), published 2020-09-28
Abstract: Standard development libraries for Virtual and Mixed Reality support haptic feedback through low-level parameters, which do not guide developers in creating effective interactions. In this paper, we report preliminary results on a simplified structure for the creation, assignment, and execution of haptic feedback for standard controllers, with the optional feature of synchronizing a haptic pattern with auditory feedback. In addition, we present the results of a preliminary test investigating users' ability to recognize variations in the intensity and/or duration of the stimulus, especially when the two dimensions are combined to encode information.

Title: DELEX
Authors: A. F. Abate, Aniello Castiglione, Michele Nappi, Ignazio Passero
DOI: https://doi.org/10.1145/3399715.3399820
Venue: Proceedings of the International Conference on Advanced Visual Interfaces (AVI 2020), published 2020-09-28
Abstract: Recent advances in Machine Learning have unveiled interesting possibilities for investigating, in real time, user characteristics and expressions such as, but not limited to, age, sex, body posture, emotions, and moods. These new opportunities lay the foundations for new HCI tools for interactive applications that adopt user emotions as a communication channel. This paper presents an Emotion Controlled User Experience that changes according to user feelings and emotions analysed at runtime. To obtain a preliminary evaluation of the proposed ecosystem, a controlled experiment was performed in an engineering and software development company, where 60 people took part as volunteers. The subjective evaluation was based on a standard questionnaire commonly adopted for measuring users' perceived sense of immersion in Virtual Environments. The results of the controlled experiment encourage further investigations, strengthened by the analysis of objective performance measurements and user physiological parameters.