Visual Informatics: Latest Publications

Importance guided stream surface generation and feature exploration
IF 3.0 · CAS Q3 · Computer Science
Visual Informatics · Pub Date: 2023-06-01 · DOI: 10.1016/j.visinf.2023.05.002
Kunhua Su, Jun Zhang, Deyue Xie, Jun Tao
{"title":"Importance guided stream surface generation and feature exploration","authors":"Kunhua Su,&nbsp;Jun Zhang,&nbsp;Deyue Xie,&nbsp;Jun Tao","doi":"10.1016/j.visinf.2023.05.002","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.05.002","url":null,"abstract":"<div><p>Exploring flow features and patterns hidden behind the data has received extensive academic attention in flow visualization. In this paper, we introduce an importance-guided surface generation and exploration scheme to explore the features and their connections. The features are expressed as an importance field, which can either be derived from a scalar field or be specified as a flow pattern. Guided by the importance field, we sample a pool of seeding curves along the binormal direction and construct stream surfaces to fit the regions of high- importance values. Our scheme evaluates candidate seeding curves by collecting importance scores from the curve and corresponding streamlines. The candidate seeding curves are refined using the high-score segments to identify the optimal surfaces. Comparative visualization among different kinds of flow features across time steps can be easily derived for flow structure analysis. In order to reduce the visual complexity, we leverage SurfRiver to achieve clearer observation by flattening and aligning the surface. Finally, we apply our surface generation scheme guided by flow patterns and scalar fields to evaluate the effectiveness of the proposed tool.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 2","pages":"Pages 54-63"},"PeriodicalIF":3.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49709994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
INPHOVIS: Interactive visual analytics for smartphone-based digital phenotyping
IF 3.0 · CAS Q3 · Computer Science
Visual Informatics · Pub Date: 2023-06-01 · DOI: 10.1016/j.visinf.2023.01.002
Hamid Mansoor, Walter Gerych, Abdulaziz Alajaji, Luke Buquicchio, Kavin Chandrasekaran, Emmanuel Agu, Elke Rundensteiner, Angela Incollingo Rodriguez
{"title":"INPHOVIS: Interactive visual analytics for smartphone-based digital phenotyping","authors":"Hamid Mansoor,&nbsp;Walter Gerych,&nbsp;Abdulaziz Alajaji,&nbsp;Luke Buquicchio,&nbsp;Kavin Chandrasekaran,&nbsp;Emmanuel Agu,&nbsp;Elke Rundensteiner,&nbsp;Angela Incollingo Rodriguez","doi":"10.1016/j.visinf.2023.01.002","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.01.002","url":null,"abstract":"<div><p>Digital phenotyping is the characterization of human behavior patterns based on data from digital devices such as smartphones in order to gain insights into the users’ state and especially to identify ailments. To support supervised machine learning, digital phenotyping requires gathering data from study participants’ smartphones as they live their lives. Periodically, participants are then asked to provide ground truth labels about their health status. Analyzing such complex data is challenging due to limited contextual information and imperfect health/wellness labels. We propose INteractive PHOne-o-typing VISualization (INPHOVIS), an interactive visual framework for exploratory analysis of smartphone health data to study phone-o-types. Prior visualization work has focused on mobile health data with clear semantics such as steps or heart rate data collected using dedicated health devices and wearables such as smartwatches. However, unlike smartphones which are owned by over 85 percent of the US population, wearable devices are less prevalent thus reducing the number of people from whom such data can be collected. In contrast, the “low-level” sensor data (e.g., accelerometer or GPS data) supported by INPHOVIS can be easily collected using smartphones. Data visualizations are designed to provide the essential contextualization of such data and thus help analysts discover complex relationships between observed sensor values and health-predictive phone-o-types. To guide the design of INPHOVIS, we performed a hierarchical task analysis of phone-o-typing requirements with health domain experts. We then designed and implemented multiple innovative visualizations integral to INPHOVIS including stacked bar charts to show diurnal behavioral patterns, calendar views to visualize day-level data along with bar charts, and correlation views to visualize important wellness predictive data. We demonstrate the usefulness of INPHOVIS with walk-throughs of use cases. We also evaluated INPHOVIS with expert feedback and received encouraging responses.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 2","pages":"Pages 13-29"},"PeriodicalIF":3.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49732550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Visual analytics of multivariate networks with representation learning and composite variable construction
CAS Q3 · Computer Science
Visual Informatics · Pub Date: 2023-06-01 · DOI: 10.1016/j.visinf.2023.06.004
Hsiao-Ying Lu, Takanori Fujiwara, Ming-Yi Chang, Yang-chih Fu, Anders Ynnerman, Kwan-Liu Ma
{"title":"Visual analytics of multivariate networks with representation learning and composite variable construction","authors":"Hsiao-Ying Lu, Takanori Fujiwara, Ming-Yi Chang, Yang-chih Fu, Anders Ynnerman, Kwan-Liu Ma","doi":"10.1016/j.visinf.2023.06.004","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.06.004","url":null,"abstract":"Multivariate networks are commonly found in real-world data-driven applications. Uncovering and understanding the relations of interest in multivariate networks is not a trivial task. This paper presents a visual analytics workflow for studying multivariate networks to extract associations between different structural and semantic characteristics of the networks (e.g., what are the combinations of attributes largely relating to the density of a social network?). The workflow consists of a neural-network-based learning phase to classify the data based on the chosen input and output attributes, a dimensionality reduction and optimization phase to produce a simplified set of results for examination, and finally an interpreting phase conducted by the user through an interactive visualization interface. A key part of our design is a composite variable construction step that remodels nonlinear features obtained by neural networks into linear features that are intuitive to interpret. We demonstrate the capabilities of this workflow with multiple case studies on networks derived from social media usage and also evaluate the workflow through an expert interview.","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136178267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DenseCL: A simple framework for self-supervised dense visual pre-training
IF 3.0 · CAS Q3 · Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2022.09.003
Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong
{"title":"DenseCL: A simple framework for self-supervised dense visual pre-training","authors":"Xinlong Wang ,&nbsp;Rufeng Zhang ,&nbsp;Chunhua Shen ,&nbsp;Tao Kong","doi":"10.1016/j.visinf.2022.09.003","DOIUrl":"https://doi.org/10.1016/j.visinf.2022.09.003","url":null,"abstract":"<div><p>Self-supervised learning aims to learn a universal feature representation without labels. To date, most existing self-supervised learning methods are designed and optimized for image classification. These pre-trained models can be sub-optimal for dense prediction tasks due to the discrepancy between image-level prediction and pixel-level prediction. To fill this gap, we aim to design an effective, dense self-supervised learning framework that directly works at the level of pixels (or local features) by taking into account the correspondence between local features. Specifically, we present dense contrastive learning (DenseCL), which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images. Compared to the supervised ImageNet pre-training and other self-supervised learning methods, our self-supervised DenseCL pre-training demonstrates consistently superior performance when transferring to downstream dense prediction tasks including object detection, semantic segmentation and instance segmentation. Specifically, our approach significantly outperforms the strong MoCo-v2 by 2.0% AP on PASCAL VOC object detection, 1.1% AP on COCO object detection, 0.9% AP on COCO instance segmentation, 3.0% mIoU on PASCAL VOC semantic segmentation and 1.8% mIoU on Cityscapes semantic segmentation. The improvements are up to 3.5% AP and 8.8% mIoU over MoCo-v2, and 6.1% AP and 6.1% mIoU over supervised counterpart with frozen-backbone evaluation protocol.</p><p>Code and models are available at: <span>https://git.io/DenseCL</span><svg><path></path></svg></p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 1","pages":"Pages 30-40"},"PeriodicalIF":3.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49761390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sparse RGB-D images create a real thing: A flexible voxel based 3D reconstruction pipeline for single object
IF 3.0 · CAS Q3 · Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2022.12.002
Fei Luo, Yongqiong Zhu, Yanping Fu, Huajian Zhou, Zezheng Chen, Chunxia Xiao
{"title":"Sparse RGB-D images create a real thing: A flexible voxel based 3D reconstruction pipeline for single object","authors":"Fei Luo ,&nbsp;Yongqiong Zhu ,&nbsp;Yanping Fu ,&nbsp;Huajian Zhou ,&nbsp;Zezheng Chen ,&nbsp;Chunxia Xiao","doi":"10.1016/j.visinf.2022.12.002","DOIUrl":"https://doi.org/10.1016/j.visinf.2022.12.002","url":null,"abstract":"<div><p>Reconstructing 3D models for single objects with complex backgrounds has wide applications like 3D printing, AR/VR, and so on. It is necessary to consider the tradeoff between capturing data at low cost and getting high-quality reconstruction results. In this work, we propose a voxel-based modeling pipeline with sparse RGB-D images to effectively and efficiently reconstruct a single real object without the geometrical post-processing operation on background removal. First, referring to the idea of VisualHull, useless and inconsistent voxels of a targeted object are clipped. It helps focus on the target object and rectify the voxel projection information. Second, a modified TSDF calculation and voxel filling operations are proposed to alleviate the problem of depth missing in the depth images. They can improve TSDF value completeness for voxels on the surface of the object. After the mesh is generated by the MarchingCube, texture mapping is optimized with view selection, color optimization, and camera parameters fine-tuning. Experiments on Kinect capturing dataset, TUM public dataset, and virtual environment dataset validate the effectiveness and flexibility of our proposed pipeline.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 1","pages":"Pages 66-76"},"PeriodicalIF":3.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49761393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
The use of facial expressions in measuring students’ interaction with distance learning environments during the COVID-19 crisis
IF 3.0 · CAS Q3 · Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2022.10.001
Waleed Maqableh, Faisal Y. Alzyoud, Jamal Zraqou
{"title":"The use of facial expressions in measuring students’ interaction with distance learning environments during the COVID-19 crisis","authors":"Waleed Maqableh ,&nbsp;Faisal Y. Alzyoud ,&nbsp;Jamal Zraqou","doi":"10.1016/j.visinf.2022.10.001","DOIUrl":"10.1016/j.visinf.2022.10.001","url":null,"abstract":"<div><p>Digital learning is becoming increasingly important in the crisis COVID-19 and is widespread in most countries. The proliferation of smart devices and 5G telecommunications systems are contributing to the development of digital learning systems as an alternative to traditional learning systems. Digital learning includes blended learning, online learning, and personalized learning which mainly depends on the use of new technologies and strategies, so digital learning is widely developed to improve education and combat emerging disasters such as COVID-19 diseases. Despite the tremendous benefits of digital learning, there are many obstacles related to the lack of digitized curriculum and collaboration between teachers and students. Therefore, many attempts have been made to improve the learning outcomes through the following strategies: collaboration, teacher convenience, personalized learning, cost and time savings through professional development, and modeling. In this study, facial expressions and heart rates are used to measure the effectiveness of digital learning systems and the level of learners’ engagement in learning environments. The results showed that the proposed approach outperformed the known related works in terms of learning effectiveness. The results of this research can be used to develop a digital learning environment.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 1","pages":"Pages 1-17"},"PeriodicalIF":3.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9595381/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9359944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
PCP-Ed: Parallel coordinate plots for ensemble data
IF 3.0 · CAS Q3 · Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2022.10.003
Elif E. Firat, Ben Swallow, Robert S. Laramee
{"title":"PCP-Ed: Parallel coordinate plots for ensemble data","authors":"Elif E. Firat ,&nbsp;Ben Swallow ,&nbsp;Robert S. Laramee","doi":"10.1016/j.visinf.2022.10.003","DOIUrl":"https://doi.org/10.1016/j.visinf.2022.10.003","url":null,"abstract":"<div><p>The Parallel Coordinate Plot (PCP) is a complex visual design commonly used for the analysis of high-dimensional data. Increasing data size and complexity may make it challenging to decipher and uncover trends and outliers in a confined space. A dense PCP image resulting from overlapping edges may cause patterns to be covered. We develop techniques aimed at exploring the relationship between data dimensions to uncover trends in dense PCPs. We introduce correlation glyphs in the PCP view to reveal the strength of the correlation between adjacent axis pairs as well as an interactive glyph lens to uncover links between data dimensions by investigating dense areas of edge intersections. We also present a subtraction operator to identify differences between two similar multivariate data sets and relationship-guided dimensionality reduction by collapsing axis pairs. We finally present a case study of our techniques applied to ensemble data and provide feedback from a domain expert in epidemiology.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 1","pages":"Pages 56-65"},"PeriodicalIF":3.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49761392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Identifying, exploring, and interpreting time series shapes in multivariate time intervals
IF 3.0 · CAS Q3 · Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2023.01.001
Gota Shirato, Natalia Andrienko, Gennady Andrienko
{"title":"Identifying, exploring, and interpreting time series shapes in multivariate time intervals","authors":"Gota Shirato ,&nbsp;Natalia Andrienko ,&nbsp;Gennady Andrienko","doi":"10.1016/j.visinf.2023.01.001","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.01.001","url":null,"abstract":"<div><p>We introduce a concept of <em>episode</em> referring to a time interval in the development of a dynamic phenomenon that is characterized by multiple time-variant attributes. A data structure representing a single episode is a multivariate time series. To analyse collections of episodes, we propose an approach that is based on recognition of particular <em>patterns</em> in the temporal variation of the variables within episodes. Each episode is thus represented by a combination of patterns. Using this representation, we apply visual analytics techniques to fulfil a set of analysis tasks, such as investigation of the temporal distribution of the patterns, frequencies of transitions between the patterns in episode sequences, and co-occurrences of patterns of different variables within same episodes. We demonstrate our approach on two examples using real-world data, namely, dynamics of human mobility indicators during the COVID-19 pandemic and characteristics of football team movements during episodes of ball turnover.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 1","pages":"Pages 77-91"},"PeriodicalIF":3.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49732987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
TCMFVis: A visual analytics system toward bridging together traditional Chinese medicine and modern medicine
IF 3.0 · CAS Q3 · Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2022.11.001
Yichao Jin, Fuli Zhu, Jianhua Li, Lei Ma
{"title":"TCMFVis: A visual analytics system toward bridging together traditional Chinese medicine and modern medicine","authors":"Yichao Jin ,&nbsp;Fuli Zhu ,&nbsp;Jianhua Li ,&nbsp;Lei Ma","doi":"10.1016/j.visinf.2022.11.001","DOIUrl":"https://doi.org/10.1016/j.visinf.2022.11.001","url":null,"abstract":"<div><p>Although traditional Chinese medicine (TCM) and modern medicine (MM) have considerably different treatment philosophies, they both make important contributions to human health care. TCM physicians usually treat diseases using TCM formula (TCMF), which is a combination of specific herbs, based on the holistic philosophy of TCM, whereas MM physicians treat diseases using chemical drugs that interact with specific biological molecules. The difference between the holistic view of TCM and the atomistic view of MM hinders their combination. Tools that are able to bridge together TCM and MM are essential for promoting the combination of these disciplines. In this paper, we present TCMFVis, a visual analytics system that would help domain experts explore the potential use of TCMFs in MM at the molecular level. TCMFVis deals with two significant challenges, namely, (<em>i</em>) intuitively obtaining valuable insights from heterogeneous data involved in TCMFs and (<em>ii</em>) efficiently identifying the common features among a cluster of TCMFs. In this study, a four-level (herb-ingredient-target-disease) visual analytics framework was designed to facilitate the analysis of heterogeneous data in a proper workflow. Several set visualization techniques were first introduced into the system to facilitate the identification of common features among TCMFs. Case studies on two groups of TCMFs clustered by function were conducted by domain experts to evaluate TCMFVis. The results of these case studies demonstrate the usability and scalability of the system.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 1","pages":"Pages 41-55"},"PeriodicalIF":3.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49761388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VISHIEN-MAAT: Scrollytelling visualization design for explaining Siamese Neural Network concept to non-technical users
IF 3.0 · CAS Q3 · Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2023.01.004
Noptanit Chotisarn, Sarun Gulyanon, Tianye Zhang, Wei Chen
{"title":"VISHIEN-MAAT: Scrollytelling visualization design for explaining Siamese Neural Network concept to non-technical users","authors":"Noptanit Chotisarn ,&nbsp;Sarun Gulyanon ,&nbsp;Tianye Zhang ,&nbsp;Wei Chen","doi":"10.1016/j.visinf.2023.01.004","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.01.004","url":null,"abstract":"<div><p>The past decade has witnessed rapid progress in AI research since the breakthrough in deep learning. AI technology has been applied in almost every field; therefore, technical and non-technical end-users must understand these technologies to exploit them. However existing materials are designed for experts, but non-technical users need appealing materials that deliver complex ideas in easy-to-follow steps. One notable tool that fits such a profile is scrollytelling, an approach to storytelling that provides readers with a natural and rich experience at the reader’s pace, along with in-depth interactive explanations of complex concepts. Hence, this work proposes a novel visualization design for creating a scrollytelling that can effectively explain an AI concept to non-technical users. As a demonstration of our design, we created a scrollytelling to explain the Siamese Neural Network for the visual similarity matching problem. Our approach helps create a visualization valuable for a short-timeline situation like a sales pitch. The results show that the visualization based on our novel design helps improve non-technical users’ perception and machine learning concept knowledge acquisition compared to traditional materials like online articles.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 1","pages":"Pages 18-29"},"PeriodicalIF":3.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49761389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1