Computer Graphics Forum: Latest Publications

Controllable Anime Image Editing via Probability of Attribute Tags
IF 2.7, CAS Tier 4, Computer Science
Computer Graphics Forum, 43(7). Pub Date: 2024-10-24. DOI: 10.1111/cgf.15245
Zhenghao Song, Haoran Mo, Chengying Gao
Abstract: Editing anime images via probabilities of attribute tags allows the degree of manipulation to be controlled in an intuitive and convenient manner. Existing methods fall short in progressive modification and in preserving unintended regions of the input image. We propose a controllable anime image editing framework based on adjusting tag probabilities, in which a probability encoding network (PEN) encodes the probabilities into features that capture their continuous characteristics. The encoded features can then direct the generative process of a pre-trained diffusion model and facilitate linear manipulation. We also introduce a local editing module that automatically identifies the intended regions and constrains the edits to those regions only, leaving the others unchanged. Comprehensive comparisons with existing methods indicate the effectiveness of our framework in both one-shot and linear editing modes. Results in additional applications further demonstrate the generalization ability of our approach.
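The abstract above describes encoding tag probabilities into features that condition a pre-trained diffusion model. Below is a minimal sketch of that encoding step; the module name, layer sizes and tag set are hypothetical illustrations, not the authors' PEN implementation.

```python
# Minimal sketch (hypothetical names and sizes): map a vector of attribute-tag
# probabilities to a conditioning embedding that varies smoothly with them,
# so interpolating a probability yields a near-linear change in the condition.
import torch
import torch.nn as nn

class ProbabilityEncoder(nn.Module):
    def __init__(self, num_tags: int, embed_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_tags, embed_dim),
            nn.SiLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, tag_probs: torch.Tensor) -> torch.Tensor:
        # tag_probs: (batch, num_tags), each entry a probability in [0, 1]
        return self.net(tag_probs)

if __name__ == "__main__":
    enc = ProbabilityEncoder(num_tags=8)
    probs = torch.zeros(5, 8)
    probs[:, 0] = torch.linspace(0.0, 1.0, 5)  # sweep one tag's probability from 0 to 1
    cond = enc(probs)                          # (5, 256) conditioning features
    print(cond.shape)                          # a diffusion model would be conditioned on these
```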
Citations: 0
Seamless and Aligned Texture Optimization for 3D Reconstruction
IF 2.7, CAS Tier 4, Computer Science
Computer Graphics Forum, 43(7). Pub Date: 2024-10-24. DOI: 10.1111/cgf.15205
Lei Wang, Linlin Ge, Qitong Zhang, Jieqing Feng
Abstract: Restoring the appearance of a model is a crucial step toward realistic 3D reconstruction, and high-fidelity textures can also conceal some geometric defects. Since the estimated camera parameters and reconstructed geometry usually contain errors, subsequent texture mapping often suffers from undesirable visual artifacts such as blurring, ghosting, and visible seams. In particular, significant misalignment between the reconstructed model and the registered images leads to texturing the mesh with inconsistent image regions. Eliminating these artifacts to generate high-quality textures remains a challenge. In this paper, we address this issue by designing a texture optimization method that generates seamless and aligned textures for 3D reconstruction. The main idea is to detect misaligned regions between images and geometry and exclude them from texture mapping. To handle the texture holes caused by these excluded regions, a cross-patch texture hole-filling method is proposed, which can also synthesize plausible textures for invisible faces. Moreover, for better stitching of the textures from different views, an improved camera pose optimization is presented that introduces color adjustment and boundary point sampling. Experimental results show that the proposed method robustly eliminates the artifacts caused by inaccurate input data and produces high-quality textures compared with state-of-the-art methods.
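The exclusion step described above, dropping faces whose appearance disagrees across the views that see them, can be illustrated with a small sketch. This is not the paper's algorithm: it assumes per-face colours have already been sampled from each registered image, and the threshold is an arbitrary placeholder.

```python
# Minimal sketch: flag faces whose sampled colours disagree across views.
import numpy as np

def flag_misaligned_faces(face_colors: np.ndarray,
                          visibility: np.ndarray,
                          threshold: float = 0.08) -> np.ndarray:
    """face_colors: (num_views, num_faces, 3) RGB in [0, 1] sampled per view.
    visibility:  (num_views, num_faces) boolean, True if the view sees the face.
    Returns a boolean mask of faces to exclude from texture mapping."""
    num_views, num_faces, _ = face_colors.shape
    exclude = np.zeros(num_faces, dtype=bool)
    for f in range(num_faces):
        seen = visibility[:, f]
        if seen.sum() < 2:
            continue  # disagreement is undefined with fewer than two views
        colors = face_colors[seen, f]                                 # (k, 3)
        spread = np.linalg.norm(colors - colors.mean(axis=0), axis=1).mean()
        exclude[f] = spread > threshold
    return exclude

# Faces flagged here would become texture holes, later filled by a
# cross-patch synthesis step as the abstract describes.
```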
Citations: 0
CrystalNet: Texture-Aware Neural Refraction Baking for Global Illumination
IF 2.7, CAS Tier 4, Computer Science
Computer Graphics Forum, 43(7). Pub Date: 2024-10-24. DOI: 10.1111/cgf.15227
Z. Zhang, E. Simo-Serra
Abstract: Neural rendering bakes global illumination and other computationally costly effects into the weights of a neural network, allowing photorealistic images to be synthesized efficiently without relying on path tracing. In neural rendering approaches, G-buffers obtained from rasterization through direct rendering provide the network with scene information such as position, normals, and textures, achieving accurate and stable rendering quality in real time. However, because G-buffers do not capture any ray information from multiple light bounces, existing methods struggle to accurately render transparency and refraction effects. This limitation results in blurriness, distortion, and loss of detail in rendered images that contain transparency and refraction, and it is particularly noticeable in scenes with refracted objects that have high-frequency textures. In this work, we propose a neural network architecture that encodes critical rendering information, including texture coordinates from refracted rays, and enables reconstruction of high-frequency textures in areas with refraction. Our approach achieves accurate refraction rendering in challenging scenes with a diversity of overlapping transparent objects. Experimental results demonstrate that our method can interactively render high-quality refraction effects with global illumination, unlike existing neural rendering approaches. Our code can be found at https://github.com/ziyangz5/CrystalNet
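As a rough illustration of the input augmentation the abstract describes, the sketch below feeds refracted-ray texture coordinates to a small per-pixel MLP alongside ordinary G-buffer channels. Channel counts and network shape are assumptions for illustration, not the released CrystalNet architecture.

```python
# Minimal sketch: predict per-pixel radiance from G-buffer features plus the
# texture coordinates hit by the refracted ray, so high-frequency textures
# behind refractive objects can be reconstructed.
import torch
import torch.nn as nn

class RefractionAwareShader(nn.Module):
    def __init__(self, gbuffer_channels: int = 9, hidden: int = 256):
        super().__init__()
        # +2 for the (u, v) coordinates reached by the refracted ray
        self.mlp = nn.Sequential(
            nn.Linear(gbuffer_channels + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # outgoing RGB radiance
        )

    def forward(self, gbuffer: torch.Tensor, refracted_uv: torch.Tensor) -> torch.Tensor:
        # gbuffer:      (N, gbuffer_channels)  position, normal, albedo, ...
        # refracted_uv: (N, 2)
        return self.mlp(torch.cat([gbuffer, refracted_uv], dim=-1))

if __name__ == "__main__":
    shader = RefractionAwareShader()
    print(shader(torch.rand(4, 9), torch.rand(4, 2)).shape)  # (4, 3)
```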
Citations: 0
PCLC-Net: Point Cloud Completion in Arbitrary Poses with Learnable Canonical Space
IF 2.7, CAS Tier 4, Computer Science
Computer Graphics Forum, 43(7). Pub Date: 2024-10-24. DOI: 10.1111/cgf.15217
Hanmo Xu, Qingyao Shuai, Xuejin Chen
Abstract: Recovering the complete structure from partial point clouds in arbitrary poses is challenging. Recently, many efforts have been made to address this problem by developing SO(3)-equivariant completion networks or aligning the partial point clouds with a predefined canonical space before completion. However, these approaches are limited to random rotations only or demand costly pose annotation for model training. In this paper, we present a novel Network for Point cloud Completion with Learnable Canonical space (PCLC-Net) to reduce the need for pose annotations and extract SE(3)-invariant geometry features to improve the completion quality in arbitrary poses. Without pose annotations, our PCLC-Net utilizes self-supervised pose estimation to align the input partial point clouds to a canonical space that is learnable for an object category and subsequently performs shape completion in the learned canonical space. Our PCLC-Net can complete partial point clouds with arbitrary SE(3) poses without requiring pose annotations for supervision. Our PCLC-Net achieves state-of-the-art results on shape completion with arbitrary SE(3) poses on both synthetic and real scanned data. To the best of our knowledge, our method is the first to achieve shape completion in arbitrary poses without pose annotations during network training.
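The estimate-align-complete-restore loop in the abstract can be sketched as below. The pose estimator and the completion backbone are hypothetical stand-ins rather than the PCLC-Net modules; the point is that completion happens in a canonical frame and the result is mapped back to the input pose.

```python
# Minimal sketch: predict an SE(3) pose, complete the cloud in a canonical
# frame, then transform the completed shape back to the observed pose.
import torch
import torch.nn as nn

class PoseEstimator(nn.Module):
    """Predicts a rotation (via Gram-Schmidt on two 3-vectors) and a translation."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 9))

    def forward(self, pts: torch.Tensor):
        feat = self.backbone(pts).max(dim=1).values          # (B, 9) global feature
        a, b = feat[:, :3], feat[:, 3:6]
        u = nn.functional.normalize(a, dim=-1)
        v = nn.functional.normalize(b - (u * b).sum(-1, keepdim=True) * u, dim=-1)
        w = torch.cross(u, v, dim=-1)
        return torch.stack([u, v, w], dim=-1), feat[:, 6:9]  # R: (B, 3, 3), t: (B, 3)

def complete_in_canonical_space(partial, pose_net, completion_net):
    R, t = pose_net(partial)                                          # estimated pose
    canonical = torch.einsum('bij,bnj->bni', R.transpose(1, 2), partial - t[:, None])
    completed = completion_net(canonical)                             # complete in canonical frame
    return torch.einsum('bij,bnj->bni', R, completed) + t[:, None]    # back to the input pose
```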
Citations: 0
Gaussian in the Dark: Real-Time View Synthesis From Inconsistent Dark Images Using Gaussian Splatting
IF 2.7, CAS Tier 4, Computer Science
Computer Graphics Forum, 43(7). Pub Date: 2024-10-24. DOI: 10.1111/cgf.15213
Sheng Ye, Zhen-Hui Dong, Yubin Hu, Yu-Hui Wen, Yong-Jin Liu
Abstract: 3D Gaussian Splatting has recently emerged as a powerful representation that can synthesize remarkable novel views using consistent multi-view images as input. However, images captured in dark environments, where the scenes are not fully illuminated, can exhibit considerable brightness variation and multi-view inconsistency, which poses great challenges to 3D Gaussian Splatting and severely degrades its performance. To tackle this problem, we propose Gaussian-DK. Observing that the inconsistencies are mainly caused by camera imaging, we represent a consistent radiance field of the physical world using a set of anisotropic 3D Gaussians and design a camera response module to compensate for multi-view inconsistencies. We also introduce a step-based gradient scaling strategy to prevent Gaussians near the camera, which tend to become floaters, from splitting and cloning. Experiments on our proposed benchmark dataset demonstrate that Gaussian-DK produces high-quality renderings without ghosting and floater artifacts and significantly outperforms existing methods. Furthermore, we can synthesize light-up images by controlling exposure levels, clearly revealing details in shadow areas.
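A minimal sketch of the camera response idea mentioned above is given below: the Gaussians represent a consistent linear radiance field, while a small per-image exposure and a learnable tone curve absorb the brightness differences between dark captures. Names and layer sizes are assumptions, not the authors' module.

```python
# Minimal sketch: per-image exposure and tone mapping applied to the rendered
# linear radiance before the photometric loss, so exposure inconsistencies do
# not corrupt the shared 3D Gaussian representation.
import torch
import torch.nn as nn

class CameraResponse(nn.Module):
    def __init__(self, num_images: int):
        super().__init__()
        self.log_exposure = nn.Parameter(torch.zeros(num_images))    # per-image gain
        self.tone = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 3))

    def forward(self, linear_rgb: torch.Tensor, image_idx: int) -> torch.Tensor:
        # linear_rgb: (..., 3) radiance rendered from the consistent Gaussians
        exposed = linear_rgb * torch.exp(self.log_exposure[image_idx])
        return torch.sigmoid(self.tone(exposed))                      # predicted observed colour

# Training would compare CameraResponse(render, i) against dark capture i, so
# per-image exposure differences are explained here rather than by the Gaussians.
```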
Citations: 0
TempDiff: Enhancing Temporal-awareness in Latent Diffusion for Real-World Video Super-Resolution
IF 2.7, CAS Tier 4, Computer Science
Computer Graphics Forum, 43(7). Pub Date: 2024-10-18. DOI: 10.1111/cgf.15211
Q. Jiang, Q.L. Wang, L.H. Chi, X.H. Chen, Q.Y. Zhang, R. Zhou, Z.Q. Deng, J.S. Deng, B.B. Tang, S.H. Lv, J. Liu
Abstract: Latent diffusion models (LDMs) have demonstrated remarkable success in generative modeling. It is promising to leverage the potential of diffusion priors to enhance performance in image and video tasks. However, applying LDMs to video super-resolution (VSR) presents significant challenges due to the high demands for realistic details and temporal consistency in generated videos, exacerbated by the inherent stochasticity in the diffusion process. In this work, we propose a novel diffusion-based framework, Temporal-awareness Latent Diffusion Model (TempDiff), specifically designed for real-world video super-resolution, where degradations are diverse and complex. TempDiff harnesses the powerful generative prior of a pre-trained diffusion model and enhances temporal awareness through the following mechanisms: 1) Incorporating temporal layers into the denoising U-Net and VAE-Decoder, and fine-tuning these added modules to maintain temporal coherency; 2) Estimating optical flow guidance using a pre-trained flow net for latent optimization and propagation across video sequences, ensuring overall stability in the generated high-quality video. Extensive experiments demonstrate that TempDiff achieves compelling results, outperforming state-of-the-art methods on both synthetic and real-world VSR benchmark datasets. Code will be available at https://github.com/jiangqin567/TempDiff
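A minimal sketch of the kind of temporal layer described in mechanism 1) is shown below: spatial positions are held fixed while self-attention runs along the frame axis of the latent video. Sizes and the residual placement are assumptions for illustration, not the TempDiff code.

```python
# Minimal sketch: temporal self-attention over the frame dimension of latent
# video features, added residually so the pre-trained spatial behaviour is kept.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, t, c, h, w = x.shape
        seq = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)  # one sequence per pixel
        q = self.norm(seq)
        out, _ = self.attn(q, q, q)                               # attend across frames
        out = out.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
        return x + out                                            # residual connection

if __name__ == "__main__":
    layer = TemporalAttention(channels=64)
    print(layer(torch.rand(1, 8, 64, 16, 16)).shape)  # (1, 8, 64, 16, 16)
```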
Citations: 0
NeuPreSS: Compact Neural Precomputed Subsurface Scattering for Distant Lighting of Heterogeneous Translucent Objects
IF 2.7, CAS Tier 4, Computer Science
Computer Graphics Forum, 43(7). Pub Date: 2024-10-18. DOI: 10.1111/cgf.15234
T. TG, J. R. Frisvad, R. Ramamoorthi, H. W. Jensen
Abstract: Monte Carlo rendering of translucent objects with heterogeneous scattering properties is often expensive both in terms of memory and computation. If the scattering properties are described by a 3D texture, memory consumption is high. If we do path tracing and use a high dynamic range lighting environment, the computational cost of the rendering can easily become significant. We propose a compact and efficient neural method for representing and rendering the appearance of heterogeneous translucent objects. Instead of assuming only surface variation of optical properties, our method represents the appearance of a full object taking its geometry and volumetric heterogeneities into account. This is similar to a neural radiance field, but our representation works for an arbitrary distant lighting environment. In a sense, we present a version of neural precomputed radiance transfer that captures relighting of heterogeneous translucent objects. We use a multi-layer perceptron (MLP) with skip connections to represent the appearance of an object as a function of spatial position, direction of observation, and direction of incidence. The latter is considered a directional light incident across the entire non-self-shadowed part of the object. We demonstrate the ability of our method to compactly store highly complex materials while having high accuracy when comparing to reference images of the represented object in unseen lighting environments. As compared with path tracing of a heterogeneous light scattering volume behind a refractive interface, our method more easily enables importance sampling of the directions of incidence and can be integrated into existing rendering frameworks while achieving interactive frame rates.
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15234
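A minimal sketch of the appearance function described above is given below: an MLP with a skip connection mapping surface position, view direction and incident light direction to RGB radiance. Layer sizes are assumptions, not the paper's network.

```python
# Minimal sketch: appearance as f(position, view direction, incident direction),
# with the input re-injected halfway through the network (skip connection).
import torch
import torch.nn as nn

class TranslucentAppearanceMLP(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        in_dim = 3 + 3 + 3  # position, view direction, incident light direction
        self.first = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.second = nn.Sequential(nn.Linear(hidden + in_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 3))  # RGB radiance

    def forward(self, x: torch.Tensor, wo: torch.Tensor, wi: torch.Tensor) -> torch.Tensor:
        inp = torch.cat([x, wo, wi], dim=-1)
        h = self.first(inp)
        return self.second(torch.cat([h, inp], dim=-1))  # skip connection

# Because the incident direction is an explicit input, relighting under a distant
# environment map amounts to sampling directions wi and summing the predictions
# weighted by the environment radiance, which is what makes importance sampling easy.
```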
Citations: 0
Hierarchical Spherical Cross-Parameterization for Deforming Characters
IF 2.7, CAS Tier 4, Computer Science
Computer Graphics Forum, 43(6). Pub Date: 2024-09-19. DOI: 10.1111/cgf.15197
Lizhou Cao, Chao Peng
Abstract: The demand for immersive technology and realistic virtual environments has created a need for automated solutions to generate characters with morphological variations. However, existing approaches either rely on manual labour or oversimplify the problem by limiting it to static meshes or deformation transfers without shape morphing. In this paper, we propose a new cross-parameterization approach that semi-automates the generation of morphologically diverse characters with synthesized articulations and animations. The main contribution of this work is that our approach parameterizes deforming characters into a novel hierarchical multi-sphere domain, while considering the attributes of mesh topology, deformation and animation. With such a multi-sphere domain, our approach minimizes parametric distortion rates, enhances the bijectivity of parameterization and aligns deforming feature correspondences. The alignment process we propose allows users to focus only on major joint pairs, which is much simpler and more intuitive than the existing alignment solutions that involve a manual process of identifying feature points on mesh surfaces. Compared to recent works, our approach achieves high-quality results in the applications of 3D morphing, texture transfer, character synthesis and deformation transfer.
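The paper's hierarchical multi-sphere construction is well beyond a short snippet, but the elementary building block it generalizes, mapping a genus-0 mesh onto a single sphere, can be sketched as below. This naive projection-plus-relaxation scheme is only an illustration and is not the method proposed in the paper.

```python
# Minimal sketch: naive spherical parameterization of a genus-0 mesh by
# normalizing vertices onto the unit sphere and iteratively relaxing them
# toward their neighbours' average while re-projecting onto the sphere.
import numpy as np

def naive_spherical_parameterization(vertices: np.ndarray,
                                     neighbors: list[list[int]],
                                     iterations: int = 200) -> np.ndarray:
    """vertices: (N, 3); neighbors[i] lists the vertex indices adjacent to i."""
    p = vertices - vertices.mean(axis=0)
    p /= np.linalg.norm(p, axis=1, keepdims=True)             # initial projection
    for _ in range(iterations):
        avg = np.array([p[nbrs].mean(axis=0) for nbrs in neighbors])
        p = 0.5 * p + 0.5 * avg                                # Laplacian smoothing step
        p /= np.linalg.norm(p, axis=1, keepdims=True)          # stay on the sphere
    return p  # a common spherical domain that two characters could share
```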
Citations: 0
Deep SVBRDF Acquisition and Modelling: A Survey
IF 2.7, CAS Tier 4, Computer Science
Computer Graphics Forum, 43(6). Pub Date: 2024-09-16. DOI: 10.1111/cgf.15199
Behnaz Kavoosighafi, Saghi Hajisharif, Ehsan Miandji, Gabriel Baravdish, Wen Cao, Jonas Unger
Abstract: Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine-learning-driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high-quality measurements of bi-directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi-directional Reflectance Distribution Functions (SVBRDFs). Learning-based approaches for appearance capture will play a key role in the development of new technologies that will exhibit a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State-of-the-Art Report (STAR) presents an in-depth overview of the state-of-the-art in machine-learning-driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling complex light interaction properties of real-world materials. The overview includes a categorization of current methods along with a summary of each technique, an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR is concluded by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at computergraphics.on.liu.se/star_svbrdf_dl/.
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15199
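For readers new to the topic, the sketch below shows what an SVBRDF is operationally: per-pixel diffuse, specular, roughness and normal maps evaluated with a standard Cook-Torrance/GGX model. It is a generic textbook formulation, not tied to any particular method surveyed in the report.

```python
# Minimal sketch: evaluate per-pixel reflectance from SVBRDF maps for given
# unit light (wi) and view (wo) directions, using GGX + Schlick Fresnel + Smith G.
import numpy as np

def eval_svbrdf(diffuse, specular, roughness, normal, wi, wo):
    """diffuse, specular, normal: (H, W, 3); roughness: (H, W, 1); wi, wo: (3,)."""
    wi = wi / np.linalg.norm(wi)
    wo = wo / np.linalg.norm(wo)
    h = (wi + wo) / np.linalg.norm(wi + wo)                          # half vector
    n = normal / np.linalg.norm(normal, axis=-1, keepdims=True)
    ndl = np.clip((n * wi).sum(-1, keepdims=True), 1e-4, 1.0)
    ndv = np.clip((n * wo).sum(-1, keepdims=True), 1e-4, 1.0)
    ndh = np.clip((n * h).sum(-1, keepdims=True), 1e-4, 1.0)
    vdh = np.clip(np.dot(wo, h), 1e-4, 1.0)
    alpha = roughness ** 2
    D = alpha**2 / (np.pi * (ndh**2 * (alpha**2 - 1.0) + 1.0) ** 2)  # GGX distribution
    F = specular + (1.0 - specular) * (1.0 - vdh) ** 5               # Schlick Fresnel
    k = alpha / 2.0
    G = (ndl / (ndl * (1 - k) + k)) * (ndv / (ndv * (1 - k) + k))    # Smith shadowing
    return diffuse / np.pi + D * F * G / (4.0 * ndl * ndv)           # (H, W, 3) reflectance
```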
Citations: 0
EBPVis: Visual Analytics of Economic Behavior Patterns in a Virtual Experimental Environment
IF 2.7, CAS Tier 4, Computer Science
Computer Graphics Forum, 43(6). Pub Date: 2024-09-13. DOI: 10.1111/cgf.15200
Yuhua Liu, Yuming Ma, Qing Shi, Jin Wen, Wanjun Zheng, Xuanwu Yue, Hang Ye, Wei Chen, Yuwei Meng, Zhiguang Zhou
Abstract: Experimental economics is an important branch of economics that studies human behaviour in a controlled laboratory setting or out in the field. Scientific experiments are conducted in experimental economics to record the decisions people make in specific circumstances and to verify economic theories. Decisions and outcomes, two key variables in the virtual experimental environment, change with participants' subjective factors and with objective circumstances, making it difficult to capture human behaviour patterns and establish the correlations needed to verify economic theories. In this paper, we present a visual analytics system, EBPVis, which enables economists to visually explore human behaviour patterns and faithfully verify economic theories, e.g. the vicious cycle of poverty and the poverty trap. We utilize a Doc2Vec model to transform the economic behaviours of participants into a vectorized space according to their sequential decisions, where frequent sequences can be easily perceived and extracted to represent human behaviour patterns. To explore the correlation between decisions and outcomes, an Outcome View is designed to display the outcome variables for behaviour patterns. We also provide a Comparison View to support efficient comparison between multiple behaviour patterns by revealing their differences in terms of decision combinations and time-varying profits. Moreover, an Individual View is designed to illustrate the outcome accumulation and behaviour patterns of individual subjects. Case studies, expert feedback and user studies based on a real-world dataset have demonstrated the effectiveness and practicability of EBPVis in representing economic behaviour patterns and verifying economic theories.
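The embedding step described above can be sketched with an off-the-shelf Doc2Vec model. The toy decision vocabulary below is hypothetical; the snippet assumes gensim 4.x and only illustrates how decision sequences become vectors that place similar behaviour patterns close together.

```python
# Minimal sketch: treat each participant's decision sequence as a document and
# embed it with Doc2Vec, so similar economic behaviour patterns land close together.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

decision_sequences = [                      # hypothetical toy data, one list per participant
    ["save", "invest", "invest", "consume"],
    ["consume", "consume", "borrow", "consume"],
    ["save", "invest", "consume", "invest"],
]
docs = [TaggedDocument(words=seq, tags=[i]) for i, seq in enumerate(decision_sequences)]
model = Doc2Vec(docs, vector_size=16, window=2, min_count=1, epochs=100)

vec = model.dv[0]                                     # embedding of participant 0's behaviour
new = model.infer_vector(["save", "invest", "save"])  # embed an unseen decision sequence
print(vec.shape, new.shape)
```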
Citations: 0