SSR-VFD: Spatial Super-Resolution for Vector Field Data Analysis and Visualization

Li Guo, Shaojie Ye, Jun Han, Hao Zheng, Han Gao, D. Chen, Jian-Xun Wang, Chaoli Wang
Published in: 2020 IEEE Pacific Visualization Symposium (PacificVis)
Publication date: 2020-06-01
DOI: 10.1109/PacificVis48177.2020.8737
Citations: 40

Abstract

We present SSR-VFD, a novel deep learning framework that produces coherent spatial super-resolution (SSR) of three-dimensional vector field data (VFD). SSR-VFD is the first work that advocates a machine learning approach to generate high-resolution vector fields from low-resolution ones. The core of SSR-VFD lies in the use of three separate neural nets that take the three components of a low-resolution vector field as input and jointly output a synthesized high-resolution vector field. To capture spatial coherence, we take into account magnitude and angle losses in network optimization. Our method can work in the in situ scenario where VFD are down-sampled at simulation time for storage saving and these reduced VFD are upsampled back to their original resolution during postprocessing. To demonstrate the effectiveness of SSR-VFD, we show quantitative and qualitative results with several vector field data sets of different characteristics and compare our method against volume upscaling using bicubic interpolation, and two solutions based on CNN and GAN, respectively.
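The abstract mentions that spatial coherence is encouraged by combining magnitude and angle losses during network optimization. As a minimal sketch of that idea (not the paper's actual implementation; the weight `alpha` and the stabilizer `eps` are illustrative assumptions), the two terms can be computed per voxel and blended:

```python
import numpy as np

def magnitude_angle_loss(pred, target, alpha=0.5, eps=1e-8):
    """Combined magnitude and angle loss for 3D vector fields.

    pred, target: arrays of shape (D, H, W, 3), one 3D vector per voxel.
    alpha: hypothetical weight balancing the two terms.
    """
    # Magnitude term: squared error between per-voxel vector lengths.
    pred_mag = np.linalg.norm(pred, axis=-1)
    target_mag = np.linalg.norm(target, axis=-1)
    mag_loss = np.mean((pred_mag - target_mag) ** 2)

    # Angle term: 1 - cosine similarity between corresponding vectors,
    # so perfectly aligned directions contribute zero.
    dot = np.sum(pred * target, axis=-1)
    cos_sim = dot / (pred_mag * target_mag + eps)
    ang_loss = np.mean(1.0 - cos_sim)

    return alpha * mag_loss + (1.0 - alpha) * ang_loss
```

Penalizing direction separately from length matters for vector fields: a plain per-component MSE can favor shrunken vectors with roughly correct orientation, whereas the angle term explicitly rewards directional agreement between the synthesized and ground-truth fields.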