{"title":"Visual complexity of graphical user interfaces","authors":"Aliaksei Miniukovich, Simone Sulpizio, A. D. Angeli","doi":"10.1145/3206505.3206549","DOIUrl":"https://doi.org/10.1145/3206505.3206549","url":null,"abstract":"Graphical User Interfaces (GUIs) of low visual complexity tend to have higher aesthetics, usability and accessibility, and result in higher user satisfaction. Despite a few authors recently used or studied visual complexity, the concept of visual complexity still needs to be better defined for the use in HCI research and GUI design, with its underlying aspects systematized and operationalized, and different measures validated. This paper reviews the aspects of GUI visual complexity and operationalizes four aspects with nine computation-based measures in total. Two user studies validated the measures on two types of stimuli - webpages (study 1, n = 55) and book pages (study 2, n = 150) - with two user groups, dyslexics (people with reading difficulties) and typical readers. The same complexity aspects could be expected to determine complexity perception for both GUI types, whereas different complexity aspects could be expected to determine complexity perception for dyslexics, relative to typical readers. However, the studies showed little to no difference between dyslexics and average readers, whereas web pages did differ from book pages in what aspects made them seem complex. It was not the intergroup differences, but the stimulus type that defined criteria to judge visual complexity. 
Future research and visual design could rely on the visual complexity aspects outlined in this paper.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127365895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of semantic aids on command memorization for on-body interaction and directional gestures","authors":"Bruno Fruchard, É. Lecolinet, O. Chapuis","doi":"10.1145/3206505.3206524","DOIUrl":"https://doi.org/10.1145/3206505.3206524","url":null,"abstract":"Previous studies have shown that spatial memory and semantic aids can help users learn and remember gestural commands. Using the body as a support to combine both dimensions has therefore been proposed, but no formal evaluations have yet been reported. In this paper, we compare an on-body interaction technique (BodyLoci) to mid-air Marking menus in a virtual reality context. We consider three levels of semantic aids: no aid, story-making, and story-making with background images. Our results show important improvement when story-making is used, especially for Marking menus (28.5% better retention). Both techniques performed similarly without semantic aids, but Marking menus outperformed BodyLoci when using them (17.3% better retention). While our study does not show a benefit in using body support, it suggests that inducing users to leverage simple learning techniques, such as story-making, can substantially improve recall, and thus make it easier to master gestural techniques. 
We also analyze the strategies used by the participants for creating mnemonics to provide guidelines for future work.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126087875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the world through small green steps: improving sustainable school transportation with a game-based learning interface","authors":"A. Marconi, G. Schiavo, M. Zancanaro, G. Valetto, M. Pistore","doi":"10.1145/3206505.3206521","DOIUrl":"https://doi.org/10.1145/3206505.3206521","url":null,"abstract":"In this paper, we present a playful digital activity for primary school classrooms that promotes sustainable and active mobility by leveraging the daily journey to school into a collaborative educational experience. In the class game, stretches of distance travelled in a sustainable way by each child contributes to the advancement of the whole school on a collective virtual trip. During the trip, several virtual stops are associated with the discovery of playful learning material. The approach has been evaluated in a primary school with 87 pupils and 6 teachers actively involved in the learning activity for 12 continuous weeks. The findings from the questionnaires with parents and the interviews with teachers show a positive effect in terms of children's behavioural change as well as educational value. Indications on the use of class and school collaborative gamification activities for supporting sustainable behavioural change are discussed.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129835423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inside looking out or outside looking in?: an evaluation of visualisation modalities to support the creation of a substitutional virtual environment","authors":"Jose F. Garcia, A. Simeone, Matthew Higgins, W. Powell, V. Powell","doi":"10.1145/3206505.3206529","DOIUrl":"https://doi.org/10.1145/3206505.3206529","url":null,"abstract":"Current Virtual Reality systems only allow users to draw a rectangular perimeter to mark the room-scale area they intend to use. Domestic environments can include furniture and other obstacles that hinder the ease with which users can naturally walk. By leveraging the benefits of passive haptics, users can match physical objects with virtual counterparts, to create substitutional environments. In this paper we explore two visualisation modalities to aid in the creation of a coarse virtual representation of the physical environment, by marking out the volumes of space where physical obstacles are located, to support the substitution process. Our study investigates whether this process is better supported by an inside-looking-out 3D User Interface (that is, viewing the outside world while immersed in Virtual Reality) or from an outside-looking-in one (while viewing the Virtual Environment through an external device, such as a tablet). 
Results show that the immersive option resulted in better accuracy and was the one with the highest overall preference ratings.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121743827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards understanding the usability of vibrotactile support for indoor orientation","authors":"G. Mori, F. Paternò, C. Santoro","doi":"10.1145/3206505.3206584","DOIUrl":"https://doi.org/10.1145/3206505.3206584","url":null,"abstract":"This study aims to understand the potential of using vibrotactile stimulation for indoor orientation in complex, unfamiliar buildings. Four vibrotactile prototypes have been analysed and tested in initial trials in order to investigate the benefits and the problems of each solution. The main goal of this study is to reach a better understanding of the design aspects that make a vibrotactile solution intuitive and effective.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121695423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does image grid visualization affect password strength and creation time in graphical authentication?","authors":"C. Katsini, G. Raptis, C. Fidas, N. Avouris","doi":"10.1145/3206505.3206546","DOIUrl":"https://doi.org/10.1145/3206505.3206546","url":null,"abstract":"Nowadays, technological advances introduce new visualization and user interaction possibilities. Focusing on the user authentication domain, graphical passwords are considered a better fit for interaction environments which lack a physical keyboard. Nonetheless, the current graphical user authentication schemes are deployed in conventional layouts, which introduce security vulnerabilities associated with the strength of the user selected passwords. Aiming to investigate the effectiveness of advanced visualization layouts in selecting stronger passwords, this paper reports a between-subject study, comparing two different design layouts a two-dimensional and a three dimensional. Results provide evidence that advanced visualization techniques provide a more suitable framework for deploying graphical user authentication schemes and underpin the need for considering such techniques for providing assistive and/or adaptive mechanisms to users aiming to assist them to create stronger graphical passwords.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114585667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trigger-action programming for context-aware elderly support in practice","authors":"C. Chesta, Luca Corcella, Marco Manca, F. Paternò, C. Santoro","doi":"10.1145/3206505.3206582","DOIUrl":"https://doi.org/10.1145/3206505.3206582","url":null,"abstract":"Remote monitoring services should be strongly personalised to the specific needs, preferences, abilities and motivations of elderly, a population segment whose characteristics can largely vary and even dynamically evolve over time for the same individual, depending on changing needs and usage contexts. We present a demo showing how a platform supporting End User Development (EUD) of context-dependent applications has been customized for remotely assisting elderly people at home. The user-editable personalisation features are specified by using trigger-action rules. The platform has been integrated with various sensors and appliances, and an application for remotely monitoring older adults at home. The resulting environment supports the possibility of creating trigger-action rules that can actually be executed when relevant sensors indicate that specific events or conditions occurred.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124774726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tiltstacks","authors":"John Tiab, Sebastian Boring, Paul Strohmeier, Anders Markussen, Jason Alexander, K. Hornbæk","doi":"10.1145/3206505.3206530","DOIUrl":"https://doi.org/10.1145/3206505.3206530","url":null,"abstract":"Many shape-changing interfaces use an array of actuated rods to create a display surface; each rod working as a pixel. However, this approach only supports pixel height manipulation and cannot produce more radical shape changes of each pixel (and thus of the display). Examples of such changes include non-horizontal pixels, pixels that overhang other pixels, or variable gaps between pixels. We present a concept for composing shape-changing interfaces by vertically stacking tilt-enabled modules. Together, stacking and tilting allow us to create a more diverse range of display surfaces than using arrays. We demonstrate this concept through TiltStacks, a shape-changing prototype built using stacked linear actuators and displays. Each tiltable module provides three degrees of freedom (z-movement, roll, and pitch); two more degrees of freedom are added through stacking modules (i.e., planar x- and y-movement).","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121396602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A study on gaze guidance using artificial color shifts","authors":"Kayo Azuma, H. Koike","doi":"10.1145/3206505.3206517","DOIUrl":"https://doi.org/10.1145/3206505.3206517","url":null,"abstract":"In Web or digital signage, content providers want to guide users' attention to the intended regions. Using active visual stimuli, such as animated or flashing objects, is effective for gaze guidance; however, it has been reported that such an approach often results in unpleasant feelings for users. This paper proposed a new method for gaze guidance using artificial color shifts that does not induce unpleasant feelings in the user. We created an image filter that separates the image in three layers, i.e., cyan, magenta, and yellow, and slightly shifted each layer the left, right, and down, respectively. The filter was applied to the entire image except the region where the user's gaze was to be guided. We conducted experiments using a gaze tracker. The experimental results showed that the proposed method can guide the user's gaze to a particular region with less unpleasant feelings.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122251847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing software hierarchy and metrics over releases","authors":"S. Humayoun, Syed Moiz Hasan, Ragaad Altarawneh, A. Ebert","doi":"10.1145/3206505.3206548","DOIUrl":"https://doi.org/10.1145/3206505.3206548","url":null,"abstract":"Analysis and understanding of large software systems requires exploring not only the software structure but also associated software metrics over the development releases. Information visualization helps in this regard greatly through interactive visualizations in comparison to exploring these through the source code or traditional software diagrams like UML diagrams. In this paper, we present our developed visualization tool, called HiMVis, that visualizes software hierarchies and metrics through multi-views visualizations on the same screen. HiMVis visualizes packages and class hierarchies through two space-filling interactive layouts. Further, it shows on demand through multiple views more than fifty software metrics information associated to a particular class or interface over all development releases. We provide a number of interaction and filtering options in the tool to make the exploration of the underlying software system more intuitive. We also conducted a brief user study with 10 participants to determine the usability of the developed tool.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114125559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}