Proceedings of the 2018 International Conference on Advanced Visual Interfaces: Latest Publications

Demonstrating vistiles: visual data exploration using mobile devices
Proceedings of the 2018 International Conference on Advanced Visual Interfaces. Pub Date: 2018-05-29. DOI: 10.1145/3206505.3206583
R. Langner, Tom Horak, Raimund Dachselt
Abstract: We demonstrate the prototype of the conceptual VisTiles framework. VisTiles allows exploring multivariate data sets by using multiple coordinated views distributed across a set of mobile devices. This setup allows users to benefit from dynamic, user-defined interface arrangements and to easily initiate co-located data exploration sessions. The current web-based prototype runs on commodity devices and can determine the spatial device arrangement through either a cross-device pinch gesture or an external tracking system. Multiple data sets are provided that can be explored with different visualizations (e.g., scatterplots, parallel coordinate plots, stream graphs). With this demonstration, we showcase the general concepts of VisTiles and discuss ideas for enhancements as well as the potential for application cases beyond data analysis.
Citations: 3
Effect of temporality, physical activity and cognitive load on spatiotemporal vibrotactile pattern recognition
Proceedings of the 2018 International Conference on Advanced Visual Interfaces. Pub Date: 2018-05-29. DOI: 10.1145/3206505.3206511
Qing Chen, S. Perrault, Quentin Roy, L. Wyse
Abstract: Previous research demonstrated users' ability to accurately recognize Spatiotemporal Vibrotactile Patterns (SVPs): sequences of vibrations on different motors occurring either sequentially or simultaneously. However, those experiments were only run in a lab setting, and users' ability to recognize SVPs in a real-world environment remains unclear. In this paper, we investigate how several factors may affect recognition: (1) physical activity (running), (2) cognitive task (i.e., a primary task, typing), (3) distribution of the vibration motors across body parts, and (4) temporality of the patterns. Our results suggest that physical activity has very little impact, especially compared to the cognitive task, the location of the vibrations, or temporality. We discuss these results and propose a set of guidelines for the design of SVPs.
Citations: 14
VISKOMMP: graph visualization meets meeting documentation
Proceedings of the 2018 International Conference on Advanced Visual Interfaces. Pub Date: 2018-05-29. DOI: 10.1145/3206505.3206565
Janine Kasper, Robert Richter, F. Thalmann, Rainer Groh
Abstract: With VISKOMMP (a visual, collaborative, multi-meeting minutes system), we aim to support users during all stages of meeting participation, with a focus on the preservation and accessibility of the produced information. Efficient use of the knowledge generated during meetings requires a comprehensive view of the aggregated data, independent of single events or documents. We present an approach that interlinks the heterogeneous information generated during meetings with enterprise knowledge. The created content and the established connections are then presented to the user in a comprehensible way. To this end, semantic technologies are utilized and a dedicated ontology is designed that covers the domains of project management and meetings.
Citations: 0
Building a qualified annotation dataset for skin lesion analysis trough gamification
Proceedings of the 2018 International Conference on Advanced Visual Interfaces. Pub Date: 2018-05-29. DOI: 10.1145/3206505.3206555
Fabrizio Balducci, P. Buono
Abstract: The deep learning approach has increased the quality of automatic medical diagnoses, at the cost of building qualified datasets to train and test such supervised machine learning methods. Image annotation is one of the main activities of dermatologists, and annotation quality depends on the physician's experience and the number of studied cases: manual annotations are very useful for extracting features such as contours, intersections, and shapes that can be used in the lesion segmentation and classification processes performed by automatic agents. This paper proposes the design of an interactive multimedia platform that enhances the annotation process for medical images in the domain of dermatology, adopting gamification and "games with a purpose" (GWAP) strategies to improve engagement and the production of qualified datasets, while also fostering their sharing and practical evaluation. Special attention is given to the design choices, theories, and assumptions as well as the implementation and technological details.
Citations: 6
Crossing spaces: towards cross-media personal information management user interfaces
Proceedings of the 2018 International Conference on Advanced Visual Interfaces. Pub Date: 2018-05-29. DOI: 10.1145/3206505.3206528
Sandra Trullemans, Payam Ebrahimi, B. Signer
Abstract: Nowadays, digital and paper documents are used simultaneously during daily tasks. While significant research has been carried out to support the re-finding of digital documents, less effort has been made to provide similar functionality for paper documents. In this paper, we present a solution that enables the design of cross-media Personal Information Management (PIM) user interfaces helping users re-find documents across digital and physical information spaces. We propose three main design requirements for the presented cross-media PIM user interfaces. Further, we illustrate how these design requirements have been applied in the development of three proof-of-concept applications and describe a software framework supporting the design of these interfaces. Finally, we discuss opportunities for future improvements of the presented cross-media PIM user interfaces.
Citations: 2
Two-level artificial-landmark scrollbars to improve revisitation in long documents
Proceedings of the 2018 International Conference on Advanced Visual Interfaces. Pub Date: 2018-05-29. DOI: 10.1145/3206505.3206588
Ehsan Sotoodeh Mollashahi, Md. Sami Uddin, C. Gutwin
Abstract: Navigating to previously-visited pages is a trivial yet fundamental task in linear control-based document viewers. These widgets (e.g., scrollbars) often do not work well, particularly for long documents. Existing solutions try to tackle this issue with bookmarks, search, history, and read wear, but these are limited in terms of effort, clutter, and interpretability. To improve revisitation support in long documents, we investigated the use of artificial landmarks similar to the visual augmentations found in physical books: coloring on page edges or indents cut into pages. We developed several artificial-landmark visualizations that represent page locations in the scrollbar for documents many hundreds of pages long, and tested them in studies where participants visited multiple locations in long documents. Results indicate that using two columns of landmark icons significantly improved revisitation performance and was preferred by users. Our two-level artificial-landmark augmented scrollbars offer a new way to support spatial memory development for long documents, and can be used either in isolation or in conjunction with current techniques.
Citations: 0
Big data landscapes: improving the visualization of machine learning-based clustering algorithms
Proceedings of the 2018 International Conference on Advanced Visual Interfaces. Pub Date: 2018-05-29. DOI: 10.1145/3206505.3206556
D. Kammer, Mandy Keck, Thomas Gründer, Rainer Groh
Abstract: With the internet, massively heterogeneous data sources need to be understood and classified to provide suitable services to users, such as content observation, data exploration, e-commerce, or adaptive learning environments. The key to providing these services is applying machine learning (ML) to generate structures via clustering and classification. Due to the intricate processes involved in ML, visual tools are needed to support the design and evaluation of ML pipelines. In this contribution, we propose a comprehensive tool that facilitates the analysis and design of ML-based clustering algorithms using multiple visualization features such as semantic zoom, glyphs, and histograms.
Citations: 6
Visual exploration and analysis of the italian cybersecurity framework
Proceedings of the 2018 International Conference on Advanced Visual Interfaces. Pub Date: 2018-05-29. DOI: 10.1145/3206505.3206579
M. Angelini, G. Blasilli, S. Lenti, G. Santucci
Abstract: In recent years, several standards and frameworks have been developed to help organizations increase the security of their Information Technology (IT) systems. To deal with the continuously evolving complexity of cyber-attacks, such solutions have to cope with an overwhelming set of concepts and are perceived as complex and hard to implement. Exploring the cyber-security state of an organization can be made more effective and proficient if supported by the right level of automation. This paper presents the implementation of a visual analytics solution, called CybeR secUrity fraMework BrowSer (CRUMBS) [2], targeted at the Italian Adaptation of the Cyber Security Framework (IACSF), derived from the National Institute of Standards and Technology (NIST) proposal [1]. In its full complexity, this adaptation presents security managers with hundreds of scattered concepts, such as functions, categories, subcategories, priorities, maturity levels, current and target profiles, and controls, making its adoption a complex activity.
The prototype is available at: http://awareserver.dis.uniroma1.it:11768/crumbs/.
Citations: 2
Choreomorphy
Proceedings of the 2018 International Conference on Advanced Visual Interfaces. Pub Date: 2018-05-29. DOI: 10.1145/3206505.3206507
K. E. Raheb, George Tsampounaris, A. Katifori, Yannis E. Ioannidis
Abstract: The name Choreomorphy is inspired by the Greek words "choros" (dance) and "morphe" (shape). Visual metaphors, such as the notion of transformation, and visual imagery are widely used in various movement and dance practices, education, and artistic creation. Motion capture and comprehensive movement representation technologies, if appropriately employed, can become valuable tools in this field. Choreomorphy is a system for a whole-body interactive experience, using motion capture and 3D technologies, that allows users to experiment with different body and movement visualizations in real time. The system offers a variety of avatars, movement visualizations, and environments that can be easily selected through a simple GUI. The motivation for designing this system is the exploration of different avatars as "digital selves" and reflection on the impact of seeing one's own body as an avatar that can vary in shape, size, gender, and human vs. non-human characteristics while dancing and improvising. Choreomorphy is interoperable with different motion capture systems, including, but not limited to, inertial, optical, and Kinect. The 3D representations and interactions are continuously updated through an explorative co-design process with dance artists and professionals across different sessions and venues.
Citations: 26
The invisible gorilla revisited: using eye tracking to investigate inattentional blindness in interface design
Proceedings of the 2018 International Conference on Advanced Visual Interfaces. Pub Date: 2018-05-29. DOI: 10.1145/3206505.3206550
H. Gelderblom, Leanne Menge
Abstract: Interface designers often use change and movement to draw users' attention. Research on change blindness and inattentional blindness challenges this approach. In Simons and Chabris' 1999 "Gorillas in our midst" experiment, they showed how people who are focused on a task are likely to miss an unforeseen event (in their case, a man in a gorilla suit), even if it appears in their field of vision. This relates to interface design because interfaces often include moving elements such as rotating banners or advertisements, which designers obviously want users to notice. We investigated how inattentional blindness affects users' perception through an eye-tracking investigation of Simons and Chabris' video as well as the website of an airline that uses a rotating banner to advertise special deals. In both cases users performed tasks that required their full attention and were then interviewed to determine to what extent they perceived the changes or new information. We compared the results of the two experiments to see how Simons and Chabris' theory applies to interface design. Our findings show that although 43% of the participants had fixations on the gorilla, only 22% said that they noticed it. On the website, 75% of participants had fixations on the moving banner, but only 33% could recall any information related to it. We offer reasons for these results and provide designers with advice on how to address the effects of inattentional blindness and change blindness in their designs.
Citations: 6