2017 21st International Conference Information Visualisation (IV): Latest Publications

A Haptic User Interface to Assess the Mobility of the Newborn's Neck
2017 21st International Conference Information Visualisation (IV) | Pub Date: 2017-11-16 | DOI: 10.1109/iV.2017.48
Said-Magomed Sadulaev, R. Lapeer, Zelimkhan Gerikhanov, Edward Morris
Abstract: A virtual reality program has been developed to assess the strength and flexibility of a computer-based model of a term fetus or newborn baby's neck. The software has a haptic/force-feedback user interface that allows clinical experts to adjust the mechanical properties of a newborn neck model, including range of motion and mechanical stiffness, at runtime. The software was assessed by ten clinical experts in obstetrics. The empirically obtained stiffness and range-of-motion values corresponded well with values reported in the literature.
Citations: 0
Acceptance and Usability of Interactive Infographics in Online Newspapers
2017 21st International Conference Information Visualisation (IV) | Pub Date: 2017-07-11 | DOI: 10.1109/iV.2017.65
S. Zwinger, Julia Langer, M. Zeiller
Abstract: Interactive infographics are a powerful tool for representing and communicating complex information. In data-driven journalism, journalists use interactive infographics to explain new insights and facts while telling complex stories on the basis of retrieved data. However, readers of online news are still inexperienced in using interactive infographics. The results of a user survey among readers of online newspapers show how readers use and interact with interactive infographics in online newspapers. To improve acceptance among users and to identify success factors for their use, the results of a usability study of interactive infographics are presented.
Citations: 4
The Role of Perspective Cues in RSVP
2017 21st International Conference Information Visualisation (IV) | Pub Date: 2017-07-11 | DOI: 10.1109/iV.2017.52
Joshua Brown, M. Witkowski, James Mardell, K. Wittenburg, R. Spence
Abstract: Riffling the pages of a book, perhaps in the search for a specific image, is an example of Rapid Serial Visual Presentation (RSVP). Even at a pace of 10 images per second, successful search is often possible. Interest in RSVP arises because a digital embodiment of RSVP has many applications. There are many possible 'modes' of RSVP. However, a mode can be especially helpful if, after the appearance of an image, and without delaying the arrival of other images, it can remain in view for a second or two to allow a user to confirm that a desired image has been found. Moreover, if a collection of images is presented in such a way as to be perceived as moving in 3D space, it is thought that the search for an individual image can thereby be enhanced by comparison with a 2D presentation. To test this conjecture we devise and use the "Deep-Flat" visual illusion, whereby a column of moving images magnifying in size is perceived as approaching the viewer in a 3D space. When the images are presented in an equivalent way horizontally as a row, the viewer tends to see this as images growing in size, but on a flat (2D) plane. We tested comparable RSVP designs in these two illusions to ascertain the relative effects of 2D- and 3D-style presentation under precisely controlled conditions. Elicited data included both performance measures (e.g., recognition success) and user preferences and opinions. We established the effectiveness of RSVP using the illusion. When tested under directly comparable conditions, we concluded that performance is not significantly affected by the illusion of depth, but that the inclusion of certain background cues can have a significantly detrimental effect on performance.
Citations: 0
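The "remain in view without delaying the arrival of other images" behaviour described in the RSVP abstract above implies overlapping display intervals: each image's onset is fixed by the presentation rate, while its dwell time may exceed the inter-onset gap. A minimal sketch of that timing logic, assuming a fixed-rate schedule; the function name and parameters are illustrative, not taken from the paper:

```python
def rsvp_intervals(n_images, rate_hz, dwell_s):
    """Return (onset, offset) times in seconds for each image.

    Onsets are paced at rate_hz regardless of dwell_s, so a long
    dwell lets an image linger without delaying its successors.
    """
    gap = 1.0 / rate_hz
    return [(i * gap, i * gap + dwell_s) for i in range(n_images)]

# At 10 images/s with a 2 s dwell, image 0 is still on screen
# while images 1..19 arrive: its interval overlaps theirs.
intervals = rsvp_intervals(5, 10.0, 2.0)
```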
Music Plagiarism at a Glance: Metrics of Similarity and Visualizations
2017 21st International Conference Information Visualisation (IV) | Pub Date: 2017-07-11 | DOI: 10.1109/iV.2017.49
R. Prisco, A. Esposito, N. Lettieri, Delfina Malandrino, Donato Pirozzi, Gianluca Zaccagnino, R. Zaccagnino
Abstract: Plagiarism is a debated topic in different fields, and in particular in music, given the huge amount of money that music is able to generate. Moreover, it is a controversial aspect in the legal field, given the subjectivity of the judges who have to pronounce on a suspicious case. Automatic detection of music plagiarism is fundamental to overcoming these limits: it represents a useful support for judges during their pronouncements and an important means of sparing musicians from spending more time in court than on composing and playing music. In this paper we address this issue by defining a new metric to discover pop-music similarity, and we study whether visualization can assist domain experts in judging suspicious cases. We describe a user study in which subjects performed different tasks on a song collection using different visual representations, to investigate which one is best in terms of intuitiveness and accuracy. The results provided positive feedback about our choices and some useful suggestions for future directions.
Citations: 20
Converting Night-Time Images to Day-Time Images through a Deep Learning Approach
2017 21st International Conference Information Visualisation (IV) | Pub Date: 2017-07-11 | DOI: 10.1109/iV.2017.16
N. Capece, U. Erra, Raffaele Scolamiero
Abstract: This paper examines the application of a deep learning approach to converting night-time images to day-time images. In particular, we show that a convolutional neural network enables the simulation of artificial and ambient light on images. We illustrate the design of the deep neural network and some preliminary results on a real indoor environment and two virtual environments rendered with a 3D graphics engine. The experimental results are encouraging and confirm that a convolutional neural network is an interesting approach in the fields of photo editing and digital image post-processing.
Citations: 7
Urban Fusion: Visualizing Urban Data Fused with Social Feeds via a Game Engine
2017 21st International Conference Information Visualisation (IV) | Pub Date: 2017-07-11 | DOI: 10.1109/iV.2017.33
J. Perháč, Wei Zeng, Shiho Asada, S. Arisona, S. Schubiger-Banz, R. Burkhard, Bernhard Klein
Abstract: This paper presents a framework that allows urban planners to navigate and interact with large datasets fused with social feeds in real time, enhanced by a virtual reality (VR) capability that further promotes the knowledge discovery process and allows users to interact with urban data in a natural yet immersive way. A challenge in urban planning is making decisions based on datasets that are often ambiguous, together with effective use of newly available yet unstructured sources of information such as social media. Providing expert users with novel ways of representing knowledge can benefit decision making. Game engines have evolved into capable testbeds for novel visualization and interaction techniques. We therefore explore the possibility of using a modern game engine as a platform for knowledge representation in urban planning and how it can be used to model ambiguity. We also investigate how urban planners can benefit from immersion in data exploration and knowledge discovery. We apply the concept of using primitives to publicly available transportation datasets and social feeds of New York City, we discuss a gesture-based VR extension of our framework, and lastly we conclude the paper with feedback from expert users in urban planning and an outlook on future challenges.
Citations: 13
Analysis of Game Development Activity Using Team-Based Learning
2017 21st International Conference Information Visualisation (IV) | Pub Date: 2017-07-11 | DOI: 10.1109/iV.2017.73
Akiko Teranishi, M. Nakayama, T. Wyeld, M. Eid
Abstract: Game development activity using Team-Based Learning (TBL) was investigated in order to identify factors contributing to the usability of the product. In this study, three teams from two different countries are compared. The following factors were examined to analyze their relationships with usability scores: (1) learning reflection, (2) social media communications within teams, and (3) participants' characteristics and information literacy. Usability scores were obtained using the System Usability Scale (SUS), with each team evaluated by the other teams. The participants' characteristics and information literacy were measured before the project started, as a pre-test. The discussions and communications via social media of each group were categorized as Proposal, Permission, Encouragement, and Acknowledgment, using protocol analysis to examine their contributions to the usability scores. After completing the study project, a learning reflection questionnaire was completed by all participants to evaluate efficacy, satisfaction, achievement of learning, and difficulties.
Citations: 2
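The abstract above does not spell out how SUS scores are computed. For reference, the standard System Usability Scale maps ten 1-5 Likert responses to a 0-100 score; a minimal sketch of that standard scoring (not code from the paper):

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert answers.

    Odd-numbered items (1st, 3rd, ...) contribute (response - 1);
    even-numbered items contribute (5 - response); the total is
    scaled by 2.5 to yield a score between 0 and 100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1..5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# A uniformly neutral respondent (all 3s) scores 50.0.
print(sus_score([3] * 10))
```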
Visualization Practices in Scandinavian Newsrooms: A Qualitative Study
2017 21st International Conference Information Visualisation (IV) | Pub Date: 2017-07-11 | DOI: 10.1109/iV.2017.54
Martin Engebretsen, H. Kennedy, Wibke Weber
Abstract: The visualization of numeric data is becoming an important element in journalism, and new tools and platforms are accelerating the development of data visualization in news discourse. In this paper we present an interview study investigating this development in Scandinavian newsrooms. Editorial leaders, data journalists, graphic designers, and developers in 10 major news organizations in Norway, Sweden, and Denmark inform the study on a range of issues concerning visual practices and experiences in the newsrooms. Elements of tension are revealed concerning the role and effect of complex, exploratory data visualizations and the role of ordinary journalists in the production of simpler charts and graphs. The results presented are the first outcome of a larger ongoing study investigating visual practices in six European countries.
Citations: 5
Identifying the Relationships Between the Visualization Context and Representation Components to Enable Recommendations for Designing New Visualizations
2017 21st International Conference Information Visualisation (IV) | Pub Date: 2017-07-11 | DOI: 10.1109/iV.2017.55
Alma Cantu, O. Grisvard, Thierry Duval, G. Coppin
Abstract: In this paper we address the question of the relationships between visualization challenges and the representation components that provide solutions to these challenges. Our approach involves extracting such relationships by identifying the context and the components of a significant number of representations and comparing the result to existing theoretical studies. To make such an identification possible, we rely on a characterization of the representation context based on a thoughtful aggregation of existing characterizations of the data type, the tasks, and the context of use of the representations. We illustrate our approach on a use case with examples of relationship extraction and of a comparison of those relationships to the theory. We believe that the establishment of such relationships makes it possible to understand the mechanisms behind the representations, in order to build a representation-design recommendation tool. Such a tool will enable us to recommend the components to use in a representation, given a visualization challenge to address.
Citations: 3
Visually Supporting Image Annotation Based on Visual Features and Ontologies
2017 21st International Conference Information Visualisation (IV) | Pub Date: 2017-07-11 | DOI: 10.1109/iV.2017.27
Jalila Filali, Hajer Baazaoui Zghal, J. Martinet
Abstract: Automatic Image Annotation (AIA) is a challenging problem in the field of image retrieval, and several methods have been proposed. However, visually supporting this important task and reducing the semantic gap between low-level image features and high-level semantic concepts remain key issues. In this paper, we propose a visually supported image annotation framework based on visual features and ontologies. Our framework relies on three main components: (i) a feature extraction and classification component, (ii) an ontology-building component, and (iii) an image annotation component. Our goal is to improve visual image annotation by (1) extracting invariant and complex visual features; (2) integrating feature classification results and semantic concepts to build the ontology; and (3) combining both visual and semantic similarities during the image annotation process.
Citations: 2