2022 26th International Conference Information Visualisation (IV) — Latest Publications

Visualization and Visual Knowledge Discovery from Big Uncertain Data
2022 26th International Conference Information Visualisation (IV) Pub Date : 2022-07-01 DOI: 10.1109/IV56949.2022.00062
C. Leung, Evan W. R. Madill, Adam G. M. Pazdor
In the current uncertain world, data keep growing bigger. Big data refer to data flows of huge volume, high velocity, wide variety, and differing levels of veracity (e.g., precise data, imprecise/uncertain data). Embedded in these big data are implicit, previously unknown, but valuable information and knowledge. With the huge volumes of information and knowledge that can be discovered by techniques such as data mining, a key challenge is to validate and visualize the mining results. To validate data for better aggregation in estimation and prediction, and to establish trustworthy artificial intelligence, a synergy of visualization models and data mining strategies is needed. Hence, in this paper, we present a solution for visualization and visual knowledge discovery from big uncertain data. Our solution discovers knowledge in the form of frequently co-occurring patterns in big uncertain data and visualizes the discovered knowledge; in particular, it shows the upper and lower bounds on the frequency of these patterns. Evaluation with real-life Coronavirus disease 2019 (COVID-19) data demonstrates the effectiveness and practicality of our solution for visualization and visual knowledge discovery from big health-informatics data collected from the current uncertain world.
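The bounds on pattern frequency mentioned in the abstract can be illustrated with a small sketch. This is not the paper's algorithm, just a minimal illustration under the common assumption that items occur in a transaction independently, each with a given existence probability: the expected support sums per-transaction probabilities of containing the pattern, while the lower and upper bounds count transactions that certainly (all probabilities equal 1) or possibly (all probabilities positive) contain it.

```python
def expected_support(db, pattern):
    """Expected support and support bounds for an itemset in an
    uncertain transaction database.

    db: list of transactions; each transaction maps item -> existence
    probability in [0, 1]. Under the independence assumption, the
    probability that a transaction contains the whole pattern is the
    product of its item probabilities.
    """
    exp = 0.0
    lower = 0  # transactions that certainly contain the pattern (all p == 1)
    upper = 0  # transactions that possibly contain it (all p > 0)
    for t in db:
        probs = [t.get(item, 0.0) for item in pattern]
        p = 1.0
        for q in probs:
            p *= q
        exp += p
        lower += all(q == 1.0 for q in probs)
        upper += all(q > 0.0 for q in probs)
    return exp, lower, upper
```

For example, a database of three transactions where only the first certainly contains both items yields an expected support between the two bounds — exactly the kind of interval the paper proposes to visualize.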
Citations: 3
Code-Space Quality Evaluation for Information Visualization
2022 26th International Conference Information Visualisation (IV) Pub Date : 2022-07-01 DOI: 10.1109/IV56949.2022.00029
Ying Zhu
Quality evaluation is essential to creating effective data visualization designs. The data visualization research community has produced many quality metrics for evaluating visualizations, yet these metrics are rarely integrated into popular visualization tools. As a result, most visualization creators are either unaware of the metrics or do not know how to apply them during the creation process. In this paper, we propose a novel quality evaluation method that integrates quality metrics into popular data visualization programming tools. Our main contribution is a code-space quality evaluation method, distinct from traditional image-space or data-space methods. Using our method, a visualization programmer passes a coded data visualization design to a quality evaluation function that generates warnings, comments, and design recommendations, allowing quality checks to be integrated into the design process.
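The paper's evaluation function and spec format are not given here, so the following is a hypothetical sketch of the idea: a linting-style function that takes a chart specification (an assumed dict format) and returns quality warnings drawn from well-known visualization guidelines.

```python
# Hypothetical sketch of a code-space quality check. The spec format and
# the three rules below are illustrative assumptions, not the paper's API.
def evaluate_vis_spec(spec):
    """Return a list of warnings/recommendations for a chart spec dict."""
    warnings = []
    if spec.get("mark") == "pie" and spec.get("num_categories", 0) > 6:
        warnings.append("Pie charts become hard to read beyond ~6 slices; "
                        "consider a bar chart.")
    if spec.get("colormap") in {"jet", "rainbow"}:
        warnings.append("Rainbow colormaps distort perceived magnitude; "
                        "prefer a perceptually uniform map such as viridis.")
    if spec.get("mark") == "bar" and not spec.get("y_starts_at_zero", True):
        warnings.append("Bar charts with a non-zero baseline exaggerate "
                        "differences; start the y-axis at zero.")
    return warnings
```

The appeal of the code-space approach is that such checks run where the design is authored, so feedback arrives before any image is rendered.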
Citations: 0
Affective Color Palette Recommendations with Non-negative Tensor Factorization
2022 26th International Conference Information Visualisation (IV) Pub Date : 2022-07-01 DOI: 10.1109/IV56949.2022.00016
Ikuya Morita, Shigeo Takahashi, Satoshi Nishimura, Kazuo Misue
Color is an essential factor influencing human perception, so the proper selection of color sets is crucial in creating informative and appealing visual content. The choice of color palettes often reflects the creator's underlying emotional intention, especially when introducing specific affective styles. This paper presents a color palette recommendation system that accounts for both preferred colors and affective expressions in visual content. This is accomplished with non-negative tensor factorization (NTF), which extends conventional matrix-based collaborative filtering for recommending items through the ratings of multiple users. In our approach, we composed a rating tensor holding the scores that user-study participants assigned to colors for each affective factor, and used it to explore the relation between affective expression and color preference. Our experiments showed that the tensor-based approach can recommend convincing sets of colors in several cases by predicting the underlying emotional intentions in the visual content design.
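The paper's exact NTF formulation is not reproduced here, so the following is a minimal numpy sketch of non-negative CP factorization via multiplicative updates, applied to a hypothetical participants × colors × affective-factors rating tensor. Missing or held-out scores would then be predicted from the low-rank reconstruction.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R)."""
    R = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, R)

def ntf_cp(X, rank, iters=200, eps=1e-9, seed=0):
    """Non-negative CP decomposition of a 3-way tensor by
    multiplicative updates; returns factors A (I x R), B (J x R), C (K x R)."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    X1 = X.reshape(I, -1)                     # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, -1)  # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, -1)  # mode-3 unfolding
    for _ in range(iters):
        A *= (X1 @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= (X2 @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= (X3 @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C
```

Recommending a palette for a given affective factor then amounts to ranking colors by the reconstructed scores `np.einsum('ir,jr,kr->ijk', A, B, C)[user, :, factor]`.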
Citations: 0
How originality looks like. Integrating visualization and meta-heuristics to dissect music plagiarism
2022 26th International Conference Information Visualisation (IV) Pub Date : 2022-07-01 DOI: 10.1109/IV56949.2022.00052
N. Lettieri, R. De Prisco, Delfina Malandrino, R. Zaccagnino, Alfonso Guarino
Plagiarism is a debated and controversial topic in many fields: in law, where the subjectivity of the judges who must rule on a suspicious case often leads to long and frequently unresolved proceedings, and in music, where large sums are invested every year to pursue suspected cases. In this scenario, the automatic detection of music plagiarism is fundamental: it provides useful support for judges during their rulings and helps musicians spend less time in court and more time composing. This paper shows how combining visual analytics with adaptive meta-heuristics can assist domain experts in judging suspicious cases. The solutions are presented as part of PlagiarismDetection, a cross-platform tool that leverages text-similarity algorithms, computational intelligence, optimization methods, and visualization techniques to enable new critical approaches to music plagiarism analysis.
Citations: 0
Observation and Visualization of Subjectivity-based Annotation Tasks
2022 26th International Conference Information Visualisation (IV) Pub Date : 2022-07-01 DOI: 10.1109/IV56949.2022.00023
Rika Miura, Ami Tochigi, T. Itoh
Annotation is an upstream process for constructing training data for machine learning tasks, and the reliability of annotation is critical to the reliability of the resulting models. Annotations vary from worker to worker, and differences in these tendencies can impair data reliability; this is especially relevant for tasks that depend on the workers' subjectivity. This study aims to realize reliable annotation by observing workers' annotation results. As a specific example, we examined the annotations of three workers who rated facial expressions on a Likert scale over 977 face images, and verified the reliability of the annotations from the visualization results.
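The paper's visual analysis is not reproduced here, but a first-pass numerical check of worker consistency on such Likert annotations can be sketched as follows (the pairwise exact-agreement and correlation measures are illustrative choices, not the study's method).

```python
import numpy as np

def rater_agreement(ratings):
    """ratings: (n_items, n_workers) array of Likert scores.

    Returns, for each worker pair (a, b), the exact-agreement rate
    (fraction of items rated identically) and the Pearson correlation
    of their scores.
    """
    n_items, n_workers = ratings.shape
    out = {}
    for a in range(n_workers):
        for b in range(a + 1, n_workers):
            exact = float(np.mean(ratings[:, a] == ratings[:, b]))
            r = float(np.corrcoef(ratings[:, a], ratings[:, b])[0, 1])
            out[(a, b)] = (exact, r)
    return out
```

A pair with high correlation but low exact agreement, for instance, suggests a consistent scale offset between two workers rather than genuine disagreement — precisely the kind of tendency the visualization is meant to expose.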
Citations: 0
Estimation of Older Driver's Cognitive Performance and Workload Using Features of Eye movement and Pupil Response on Test Routes
2022 26th International Conference Information Visualisation (IV) Pub Date : 2022-07-01 DOI: 10.1109/IV56949.2022.00033
M. Nakayama, Q. Sun, J. Xia
To evaluate the cognitive performance and mental workload of older drivers, oculo-motor features such as eye movements and pupillary responses were extracted during driving on a public road. The individual cognitive performance of 11 selected older drivers was measured in advance using conventional tests, including Manoeuvre and the useful field of view (UFOV). The extracted features varied along the test route, which was classified into five groups by route type. Regression relationships between the oculo-motor features and the cognitive test scores were modeled using the LASSO technique, with fitness and feature selection evaluated. The predicted scores for driver cognitive performance and their dependency on route group were assessed, confirming the overall feasibility of estimating the driver's condition. These results suggest that older drivers' eye movements during driving reflect their cognitive abilities and level of mental workload.
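LASSO is attractive here precisely because it performs feature selection: coefficients of uninformative oculo-motor features are driven to zero. The study's exact model is not given, but a minimal numpy sketch of LASSO via iterative soft-thresholding (ISTA) illustrates the mechanism on synthetic data.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=500):
    """LASSO via proximal gradient (ISTA).

    Minimizes (1/2n)||Xw - y||^2 + lam * ||w||_1. The gradient step uses
    the Lipschitz constant of the smooth part; the proximal step is
    soft-thresholding, which zeroes out weak coefficients.
    """
    n, d = X.shape
    w = np.zeros(d)
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        z = w - grad / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w
```

On data where only two features carry signal, the remaining coefficients come out (near) zero — the selection behavior the paper relies on to pick informative eye-movement features per route group.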
Citations: 0
Glyph-Based Visual Analysis of Q-Learning Based Action Policy Ensembles on Racetrack
2022 26th International Conference Information Visualisation (IV) Pub Date : 2022-07-01 DOI: 10.1109/IV56949.2022.00011
David Groß, M. Klauck, Timo P. Gros, Marcel Steinmetz, Jörg Hoffmann, S. Gumhold
Recently, deep reinforcement learning has become very successful at complex decision making, achieving super-human performance in Go, chess, and challenging video games. When applied to safety-critical applications, however, such as controlling cyber-physical systems with a learned action policy, the need for certification arises. To empower domain experts to decide whether to trust a learned action policy, we propose visualization methods for the detailed assessment of action policies implemented as neural networks trained with Q-learning. We present a highly responsive visual analysis tool that supports efficient analysis of Q-learning-based action policies over the complete state space of the system, which is essential for verification and for gaining detailed insight into policy quality. For efficient visual inspection of the per-action Q-value rating over the state space, we designed three glyphs that provide different levels of detail. In particular, we introduce the two-dimensional Q-Glyph, which visually encodes Q-values compactly while preserving the directional information of the actions. Placing glyphs in ordered stacks allows simultaneous inspection of policy ensembles, such as those resulting from Q-learning meta-parameter studies. Further analysis of the policy is supported by inspecting individual traces generated from a chosen start state. A user study was conducted to evaluate the effectiveness of our tool on the Racetrack case study, a commonly used benchmark in the AI community that abstracts driving control.
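The core idea of a compact, direction-preserving Q-value encoding can be sketched as follows. The softmax weighting and the arrow summary below are illustrative assumptions, not the paper's actual Q-Glyph design: the per-action Q-values of one state are collapsed into a single 2D vector whose direction points toward the preferred action and whose length reflects how decisive the policy is.

```python
import numpy as np

# Hypothetical directional actions of a grid-world driving task.
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def q_glyph_vector(q_values, temperature=1.0):
    """Summarize one state's Q-values as a 2D arrow.

    q_values: dict mapping action name -> Q-value. Actions are weighted
    by a softmax over Q, then their unit direction vectors are averaged;
    a near-zero result means the policy is indifferent at this state.
    """
    q = np.array([q_values[a] for a in ACTIONS])
    w = np.exp((q - q.max()) / temperature)
    w /= w.sum()
    dirs = np.array(list(ACTIONS.values()), dtype=float)
    return w @ dirs
```

States where the arrow nearly vanishes despite spread-out Q-values are exactly the ambiguous regions a verifier would want to inspect more closely with the detailed glyph levels.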
Citations: 1
In-Place Collaboration in Extended Reality Data Visualization
2022 26th International Conference Information Visualisation (IV) Pub Date : 2022-07-01 DOI: 10.1109/IV56949.2022.00044
Heidi Abdelhamed, Nourhan El-Faransawy, Nada Sharaf
Technologies have evolved over the centuries, and we constantly look for ways to make everyone's life easier. Analysing datasets is time-consuming, and interacting with visualisations of real-life data is an innovative way to explore any dataset. Augmented reality (AR) is a young technology with wide-open possibilities, and collaborating within an AR experience adds a new way of using it to increase human interaction with data. This paper presents the design, implementation, and evaluation of a collaborative augmented reality experience for dataset visualisation. The AR experience comprises three main modules: image tracking, dataset visualisation, and collaboration. After implementation, surveys were administered to participants to measure usability and task load.
Citations: 0
DeepFingerPCANet: Automatic Fingerprint Classification Using Deep Learning
2022 26th International Conference Information Visualisation (IV) Pub Date : 2022-07-01 DOI: 10.1109/IV56949.2022.00081
M. Hussain, Fahman Saeed, Hatim Aboalsamh, Abdul Wadood
Fingerprints are growing in popularity, and fingerprint datasets are becoming increasingly huge; they are recorded using a range of sensors embedded in smart devices such as mobile phones and personal computers. Fingerprint recognition becomes harder when prints are acquired with different sensors, which is one of the main challenges. Fingerprints can be categorized in a database to reduce the search space and speed up the query response, but classifying cross-sensor fingerprints is a challenging problem. An efficient and robust solution is a convolutional neural network (CNN), yet designing its architecture is time-consuming. To design a CNN model for fingerprint classification automatically, we developed a strategy that uses pyramidal clustering, principal component analysis (PCA), and the ratio of between-class to within-class scatter to determine the number of filters and the number of layers in the model. This aids the construction of lightweight CNN models that are efficient and speed up fingerprint classification. We validated the proposed method on two benchmark datasets, FingerPass and FVC2004, which feature noisy, low-quality fingerprints obtained via live-scan devices and various sensors. Compared with existing fingerprint classification methods and well-known pre-trained models, the newly developed models perform noticeably better.
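One way PCA can size a convolutional layer — and a plausible reading of the strategy above, though the paper's exact procedure is not given here — is to keep as many filters as principal components are needed to explain most of the variance of the input patches. A minimal numpy sketch:

```python
import numpy as np

def num_filters_by_pca(patches, var_threshold=0.95):
    """Choose a layer's filter count as the number of principal
    components needed to explain `var_threshold` of the variance of
    the (flattened) input patches."""
    X = patches - patches.mean(axis=0)
    # Singular values of the centered data give the component variances.
    s = np.linalg.svd(X, compute_uv=False)
    var = s ** 2
    ratio = np.cumsum(var) / var.sum()
    return int(np.searchsorted(ratio, var_threshold) + 1)
```

Data that is effectively low-rank thus yields few filters, which is how such a rule produces the lightweight models the abstract describes.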
Citations: 0
Interactive Web-based 3D Viewer for Multidimensional Microscope Imaging Modalities
2022 26th International Conference Information Visualisation (IV) Pub Date : 2022-07-01 DOI: 10.1109/IV56949.2022.00069
Yubraj Gupta, R. E. D. Guerrero, C. Costa, Rui Jesus, Eduardo Pinho, Luís Bastião
Recent advances in acquiring digital imaging modalities with high-throughput technologies, such as confocal laser scanning microscopy (CLSM) and focused ion beam scanning electron microscopy (FIB-SEM), give researchers unprecedented opportunities to collect massive multidimensional datasets. These data can be used to visualize the internal structure of tiny specimens (mostly cells and organisms) or to develop analytic algorithms. Visualizing such multidimensional microscope imaging data is beyond the capabilities of traditional 3D visualization packages, as it carries much information in additional dimensions — typically space, time, and channels — which has driven the development of new visualization applications. This article describes the design and implementation of an interactive web-based multidimensional 3D visualization tool for CLSM and FIB-SEM microscope imaging modalities. The application accepts DICOM files as input and provides visualization choices ranging from 3D volume/surface rendering to multiplanar reconstruction. Its performance was tested by uploading and rendering microscopy images of distinct modalities.
Citations: 0