Computer Graphics Forum: Latest Publications

Mix-Max: A Content-Aware Operator for Real-Time Texture Transitions
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum 43(6) · Pub Date: 2024-09-05 · DOI: 10.1111/cgf.15193
Romain Fournier, Basile Sauvage
{"title":"Mix-Max: A Content-Aware Operator for Real-Time Texture Transitions","authors":"Romain Fournier,&nbsp;Basile Sauvage","doi":"10.1111/cgf.15193","DOIUrl":"10.1111/cgf.15193","url":null,"abstract":"<p>Mixing textures is a basic and ubiquitous operation in data-driven algorithms for real-time texture generation and rendering. It is usually performed either by linear blending, or by cutting. We propose a new mixing operator which encompasses and extends both, creating more complex transitions that adapt to the texture's contents. Our mixing operator takes as input two or more textures along with two or more priority maps, which encode how the texture patterns should interact. The resulting mixed texture is defined pixel-wise by selecting the maximum of both priorities. We show that it integrates smoothly into two widespread applications: transition between two different textures, and texture synthesis that mixes pieces of the same texture. We provide constant-time and parallel evaluation of the resulting mix over square footprints of MIP-maps, making our operator suitable for real-time rendering. We also develop a micro-priority model, inspired by micro-geometry models in rendering, which represents sub-pixel priorities by a statistical distribution, and which allows for tuning between sharp cuts and smooth blend.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 6","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15193","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
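The selection rule described in the abstract is simple enough to sketch. Below is a minimal NumPy illustration of pixel-wise max-priority selection between several textures; it ignores the paper's MIP-map filtering and micro-priority model, and all names are illustrative, not the authors' code.

```python
import numpy as np

def mix_max(textures, priorities):
    """Pixel-wise mix: at each pixel, keep the texel of the texture
    whose priority map is largest there.

    textures:   list of (H, W, C) arrays
    priorities: list of (H, W) priority maps, one per texture
    """
    tex = np.stack(textures)                 # (N, H, W, C)
    pri = np.stack(priorities)               # (N, H, W)
    winner = np.argmax(pri, axis=0)          # (H, W) index of the max-priority input
    return np.take_along_axis(tex, winner[None, ..., None], axis=0)[0]
```

In this form the operator always produces a hard, content-aware cut; per the abstract, it is the micro-priority distribution that softens the selection toward a blend.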
Optimizing Surface Voxelization for Triangular Meshes with Equidistant Scanlines and Gap Detection
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum 43(6) · Pub Date: 2024-09-04 · DOI: 10.1111/cgf.15195
S. Delgado Díez, C. Cerrada Somolinos, S. R. Gómez Palomo
{"title":"Optimizing Surface Voxelization for Triangular Meshes with Equidistant Scanlines and Gap Detection","authors":"S. Delgado Díez,&nbsp;C. Cerrada Somolinos,&nbsp;S. R. Gómez Palomo","doi":"10.1111/cgf.15195","DOIUrl":"10.1111/cgf.15195","url":null,"abstract":"<p>This paper presents an efficient algorithm for voxelizing the surface of triangular meshes in a single compute pass. The algorithm uses parallel equidistant lines to traverse the interior of triangles, minimizing costly memory operations and avoiding visiting the same voxels multiple times. By detecting and visiting only the voxels in each line operation, the proposed method achieves better performance results. This method incorporates a gap detection step, targeting areas where scanline-based voxelization methods might fail. By selectively addressing these gaps, our method attains superior performance outcomes. Additionally, the algorithm is written entirely in a single compute GLSL shader, which makes it highly portable and vendor independent. Its simplicity also makes it easy to adapt and extend for various applications. The paper compares the results of this algorithm with other modern methods, comprehensibly comparing the time performance and resources used. Additionally, we introduce a novel metric, the ‘Slope Consistency Value’, which quantifies triangle orientation's impact on voxelization accuracy for scanline-based approaches. The results show that the proposed solution outperforms existing, modern ones and obtains better results, especially in densely populated scenes with homogeneous triangle sizes and at higher resolutions.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 6","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15195","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
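As a point of reference for the scanline idea, here is a small Python sketch of the classic voxel traversal of a single line segment (Amanatides-Woo style 3D DDA), i.e. the kind of per-line voxel enumeration a scanline voxelizer performs for each equidistant line. The paper's single-pass GLSL implementation, its equidistant line placement and its gap-detection step are not reproduced here.

```python
import numpy as np

def voxels_along_segment(p0, p1, voxel_size=1.0, max_steps=100_000):
    """Enumerate the voxels crossed by the segment p0 -> p1 (3D DDA traversal)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    cur = np.floor(p0 / voxel_size).astype(int)
    end = np.floor(p1 / voxel_size).astype(int)
    step = np.sign(d).astype(int)
    next_boundary = (cur + (step > 0)) * voxel_size
    with np.errstate(divide="ignore", invalid="ignore"):
        t_max = np.where(d != 0, (next_boundary - p0) / d, np.inf)   # distance to next boundary
        t_delta = np.where(d != 0, voxel_size / np.abs(d), np.inf)   # distance between boundaries
    voxels = [tuple(cur)]
    for _ in range(max_steps):
        if np.array_equal(cur, end):
            break
        axis = int(np.argmin(t_max))          # cross the nearest voxel boundary
        cur[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        voxels.append(tuple(cur))
    return voxels
```

Running many such traversals along parallel, equidistant lines covers a triangle's interior; the gap detection described in the abstract targets the thin configurations such a covering can otherwise miss.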
ETBHD-HMF: A Hierarchical Multimodal Fusion Architecture for Enhanced Text-Based Hair Design
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum 43(6) · Pub Date: 2024-09-03 · DOI: 10.1111/cgf.15194
Rong He, Ge Jiao, Chen Li
{"title":"ETBHD-HMF: A Hierarchical Multimodal Fusion Architecture for Enhanced Text-Based Hair Design","authors":"Rong He,&nbsp;Ge Jiao,&nbsp;Chen Li","doi":"10.1111/cgf.15194","DOIUrl":"10.1111/cgf.15194","url":null,"abstract":"<p>Text-based hair design (TBHD) represents an innovative approach that utilizes text instructions for crafting hairstyle and colour, renowned for its flexibility and scalability. However, enhancing TBHD algorithms to improve generation quality and editing accuracy remains a current research difficulty. One important reason is that existing models fall short in alignment and fusion designs. Therefore, we propose a new layered multimodal fusion network called ETBHD-HMF, which decouples the input image and hair text information into layered hair colour and hairstyle representations. Within this network, the channel enhancement separation (CES) module is proposed to enhance important signals and suppress noise for text representation obtained from CLIP, thus improving generation quality. Based on this, we develop the weighted mapping fusion (WMF) sub-networks for hair colour and hairstyle. This sub-network applies the mapper operations to input image and text representations, acquiring joint information. The WMF then selectively merges image representation and joint information from various style layers using weighted operations, ultimately achieving fine-grained hairstyle designs. Additionally, to enhance editing accuracy and quality, we design a modality alignment loss to refine and optimize the information transmission and integration of the network. The experimental results of applying the network to the CelebA-HQ dataset demonstrate that our proposed model exhibits superior overall performance in terms of generation quality, visual realism, and editing accuracy. ETBHD-HMF (27.8 PSNR, 0.864 IDS) outperformed HairCLIP (26.9 PSNR, 0.828 IDS), with a 3% higher PSNR and a 4% higher IDS.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 6","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
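A rough PyTorch sketch of the weighted mapping fusion idea as the abstract describes it: mapper layers produce joint information from the image and text representations, and a learned weight merges it back with the image representation. The module layout, layer choices and names here are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class WeightedMappingFusion(nn.Module):
    """Merge an image representation with joint image-text information
    using a learned, element-wise weight (illustrative sketch only)."""
    def __init__(self, dim):
        super().__init__()
        self.image_mapper = nn.Linear(dim, dim)
        self.text_mapper = nn.Linear(dim, dim)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, image_rep, text_rep):
        # "Mapper operations" on both modalities yield the joint information.
        joint = self.image_mapper(image_rep) + self.text_mapper(text_rep)
        # A learned per-channel weight decides how much joint information to inject.
        w = self.gate(torch.cat([image_rep, joint], dim=-1))
        return w * image_rep + (1.0 - w) * joint
```

In the paper this fusion is applied separately for hair colour and hairstyle and across several style layers, after the CES module has cleaned the CLIP text representation.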
Directional Texture Editing for 3D Models
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum 43(6) · Pub Date: 2024-09-02 · DOI: 10.1111/cgf.15196
Shengqi Liu, Zhuo Chen, Jingnan Gao, Yichao Yan, Wenhan Zhu, Jiangjing Lyu, Xiaokang Yang
{"title":"Directional Texture Editing for 3D Models","authors":"Shengqi Liu,&nbsp;Zhuo Chen,&nbsp;Jingnan Gao,&nbsp;Yichao Yan,&nbsp;Wenhan Zhu,&nbsp;Jiangjing Lyu,&nbsp;Xiaokang Yang","doi":"10.1111/cgf.15196","DOIUrl":"10.1111/cgf.15196","url":null,"abstract":"<p>Texture editing is a crucial task in 3D modelling that allows users to automatically manipulate the surface materials of 3D models. However, the inherent complexity of 3D models and the ambiguous text description lead to the challenge of this task. To tackle this challenge, we propose ITEM3D, a <b>T</b>exture <b>E</b>diting <b>M</b>odel designed for automatic <b>3D</b> object editing according to the text <b>I</b>nstructions. Leveraging the diffusion models and the differentiable rendering, ITEM3D takes the rendered images as the bridge between text and 3D representation and further optimizes the disentangled texture and environment map. Previous methods adopted the absolute editing direction, namely score distillation sampling (SDS) as the optimization objective, which unfortunately results in noisy appearances and text inconsistencies. To solve the problem caused by the ambiguous text, we introduce a relative editing direction, an optimization objective defined by the noise difference between the source and target texts, to release the semantic ambiguity between the texts and images. Additionally, we gradually adjust the direction during optimization to further address the unexpected deviation in the texture domain. Qualitative and quantitative experiments show that our ITEM3D outperforms the state-of-the-art methods on various 3D objects. We also perform text-guided relighting to show explicit control over lighting. Our project page: https://shengqiliu1.github.io/ITEM3D/.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 6","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
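The relative editing direction can be sketched as the difference between a diffusion model's noise predictions under the target and source prompts, used as the gradient applied to the rendered image. The wrapper below is a hypothetical helper, not the API of any particular library, and the reparameterization trick is only one plausible way to turn the abstract's description into an optimizable loss.

```python
import torch

def relative_editing_loss(predict_noise, x_t, t, source_emb, target_emb):
    """predict_noise(x_t, t, cond) -> predicted noise of a pretrained denoiser
    (hypothetical wrapper). The loss is written so that its gradient w.r.t. the
    differentiably rendered, then noised, image x_t equals the noise difference
    between the target and source prompts, i.e. the relative editing direction."""
    with torch.no_grad():
        eps_src = predict_noise(x_t, t, source_emb)
        eps_tgt = predict_noise(x_t, t, target_emb)
    direction = eps_tgt - eps_src          # relative editing direction
    return (direction * x_t).sum()         # d(loss)/d(x_t) == direction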
Row–Column Separated Attention Based Low-Light Image/Video Enhancement
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum 43(6) · Pub Date: 2024-08-29 · DOI: 10.1111/cgf.15192
Chengqi Dong, Zhiyuan Cao, Tuoshi Qi, Kexin Wu, Yixing Gao, Fan Tang
{"title":"Row–Column Separated Attention Based Low-Light Image/Video Enhancement","authors":"Chengqi Dong,&nbsp;Zhiyuan Cao,&nbsp;Tuoshi Qi,&nbsp;Kexin Wu,&nbsp;Yixing Gao,&nbsp;Fan Tang","doi":"10.1111/cgf.15192","DOIUrl":"10.1111/cgf.15192","url":null,"abstract":"<p>U-Net structure is widely used for low-light image/video enhancement. The enhanced images result in areas with large local noise and loss of more details without proper guidance for global information. Attention mechanisms can better focus on and use global information. However, attention to images could significantly increase the number of parameters and computations. We propose a Row–Column Separated Attention module (RCSA) inserted after an improved U-Net. The RCSA module's input is the mean and maximum of the row and column of the feature map, which utilizes global information to guide local information with fewer parameters. We propose two temporal loss functions to apply the method to low-light video enhancement and maintain temporal consistency. Extensive experiments on the LOL, MIT Adobe FiveK image, and SDSD video datasets demonstrate the effectiveness of our approach.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 6","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
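A toy PyTorch sketch of row-column separated attention as the abstract describes it: the per-row and per-column mean and maximum of the feature map are turned into gating weights, so global statistics guide local features at low parameter cost. Shapes and layer choices are assumptions, not the paper's module.

```python
import torch
import torch.nn as nn

class RowColumnAttention(nn.Module):
    """Gate a feature map with weights derived from its global row and
    column statistics (mean and max). Illustrative sketch only."""
    def __init__(self, channels):
        super().__init__()
        self.row_fc = nn.Conv1d(2 * channels, channels, kernel_size=1)
        self.col_fc = nn.Conv1d(2 * channels, channels, kernel_size=1)

    def forward(self, x):                                        # x: (B, C, H, W)
        row = torch.cat([x.mean(dim=3), x.amax(dim=3)], dim=1)   # (B, 2C, H)
        col = torch.cat([x.mean(dim=2), x.amax(dim=2)], dim=1)   # (B, 2C, W)
        row_w = torch.sigmoid(self.row_fc(row)).unsqueeze(3)     # (B, C, H, 1)
        col_w = torch.sigmoid(self.col_fc(col)).unsqueeze(2)     # (B, C, 1, W)
        return x * row_w * col_w        # global statistics gate local features
```

Inserted after a U-Net backbone, such a module keeps the feature map's shape and adds only two pointwise 1-D convolutions worth of parameters.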
Entropy-driven Progressive Compression of 3D Point Clouds
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum 43(5) · Pub Date: 2024-08-22 · DOI: 10.1111/cgf.15130
A. Zampieri, G. Delarue, N. Abou Bakr, P. Alliez
{"title":"Entropy-driven Progressive Compression of 3D Point Clouds","authors":"A. Zampieri,&nbsp;G. Delarue,&nbsp;N. Abou Bakr,&nbsp;P. Alliez","doi":"10.1111/cgf.15130","DOIUrl":"https://doi.org/10.1111/cgf.15130","url":null,"abstract":"<p>3D point clouds stand as one of the prevalent representations for 3D data, offering the advantage of closely aligning with sensing technologies and providing an unbiased representation of a measured physical scene. Progressive compression is required for real-world applications operating on networked infrastructures with restricted or variable bandwidth. We contribute a novel approach that leverages a recursive binary space partition, where the partitioning planes are not necessarily axis-aligned and optimized via an entropy criterion. The planes are encoded via a novel adaptive quantization method combined with prediction. The input 3D point cloud is encoded as an interlaced stream of partitioning planes and number of points in the cells of the partition. Compared to previous work, the added value is an improved rate-distortion performance, especially for very low bitrates. The latter are critical for interactive navigation of large 3D point clouds on heterogeneous networked infrastructures.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 5","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142041549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
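As a toy illustration of scoring a partitioning plane with an entropy criterion, the sketch below rates candidate planes by the entropy of the two-cell point distribution they induce and keeps the best one. It restricts itself to axis-aligned candidates and to one made-up scoring choice (maximizing balance); the paper's planes are not necessarily axis-aligned, and its actual criterion, quantization and prediction scheme are not reproduced here.

```python
import numpy as np

def split_entropy(points, normal, offset):
    """Entropy (bits) of the left/right point counts induced by the plane
    normal . x = offset."""
    frac = float((points @ normal > offset).mean())
    if frac in (0.0, 1.0):
        return 0.0
    return -(frac * np.log2(frac) + (1 - frac) * np.log2(1 - frac))

def best_axis_aligned_plane(points, candidates_per_axis=8):
    """Pick the most balanced (highest-entropy) axis-aligned candidate plane."""
    best = None
    for axis in range(3):
        normal = np.zeros(3)
        normal[axis] = 1.0
        lo, hi = points[:, axis].min(), points[:, axis].max()
        for offset in np.linspace(lo, hi, candidates_per_axis + 2)[1:-1]:
            score = split_entropy(points, normal, offset)
            if best is None or score > best[0]:
                best = (score, normal.copy(), float(offset))
    return best  # (entropy, plane normal, plane offset)
```

Recursing on the two resulting cells and transmitting, for each cell, the chosen plane plus its point count mirrors the interlaced stream structure the abstract describes.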
Front Matter
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum 43(5), pp. i–x · Pub Date: 2024-08-22 · DOI: 10.1111/cgf.15144
{"title":"Front Matter","authors":"","doi":"10.1111/cgf.15144","DOIUrl":"https://doi.org/10.1111/cgf.15144","url":null,"abstract":"&lt;p&gt;Massachusetts Institute of Technology, Cambridge, MA, USA&lt;/p&gt;&lt;p&gt;June 24 – 26, 2024&lt;/p&gt;&lt;p&gt;&lt;b&gt;Conference Co-Chairs&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Justin Solomon, MIT&lt;/p&gt;&lt;p&gt;Mina Konaković Luković, MIT&lt;/p&gt;&lt;p&gt;&lt;b&gt;Technical Program Co-Chairs&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Ruizhen Hu, Shenzhen University&lt;/p&gt;&lt;p&gt;Sylvain Lefebvre, INRIA&lt;/p&gt;&lt;p&gt;&lt;b&gt;Graduate School Co-Chairs&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Silvia Sellán, University of Toronto&lt;/p&gt;&lt;p&gt;Edward Chien, Boston University&lt;/p&gt;&lt;p&gt;&lt;b&gt;Steering Committee&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Leif Kobbelt, RWTH Aachen University, DE&lt;/p&gt;&lt;p&gt;Marc Alexa, Technische Universität Berlin, DE&lt;/p&gt;&lt;p&gt;Pierre Alliez, INRIA, FR&lt;/p&gt;&lt;p&gt;Mirela Ben-Chen, Technion-IIT, IL&lt;/p&gt;&lt;p&gt;Hui Huang, Shenzhen University, CN&lt;/p&gt;&lt;p&gt;Niloy Mitra, University College London, GB&lt;/p&gt;&lt;p&gt;Daniele Panozzo, New York University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Alexa, Marc&lt;/b&gt;&lt;/p&gt;&lt;p&gt;TU Berlin, DE&lt;/p&gt;&lt;p&gt;&lt;b&gt;Alliez, Pierre&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Inria Sophia Antipolis, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Bærentzen, Jakob Andreas&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Technical University of Denmark, DK&lt;/p&gt;&lt;p&gt;&lt;b&gt;Belyaev, Alexander&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Heriot-Watt University, GB&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ben-Chen, Mirela&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Technion - Israel Institute of Technology, IL&lt;/p&gt;&lt;p&gt;&lt;b&gt;Benes, Bedrich&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Purdue University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Bommes, David&lt;/b&gt;&lt;/p&gt;&lt;p&gt;University of Bern, CH&lt;/p&gt;&lt;p&gt;&lt;b&gt;Bonneel, Nicolas&lt;/b&gt;&lt;/p&gt;&lt;p&gt;CNRS, Université Lyon, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Botsch, Mario&lt;/b&gt;&lt;/p&gt;&lt;p&gt;TU Dortmund, DE&lt;/p&gt;&lt;p&gt;&lt;b&gt;Boubekeur, Tamy&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Adobe Research, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Campen, Marcel&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Osnabrück University, DE&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chaine, Raphaelle&lt;/b&gt;&lt;/p&gt;&lt;p&gt;LIRIS CNRS, Université Lyon 1, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chen, Renjie&lt;/b&gt;&lt;/p&gt;&lt;p&gt;University of Science and Technology of China, CN&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chen, Zhonggui&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Xiamen University, CN&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chien, Edward&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Boston University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Cignoni, Paolo&lt;/b&gt;&lt;/p&gt;&lt;p&gt;ISTI - CNR, IT&lt;/p&gt;&lt;p&gt;&lt;b&gt;Cohen-Steiner, David&lt;/b&gt;&lt;/p&gt;&lt;p&gt;INRIA, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Desbrun, Mathieu&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Inria / Ecole Polytechnique, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Dey, Tamal&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Purdue University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Digne, Julie&lt;/b&gt;&lt;/p&gt;&lt;p&gt;LIRIS - CNRS, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Fu, Xiao-Ming&lt;/b&gt;&lt;/p&gt;&lt;p&gt;USTC, CN&lt;/p&gt;&lt;p&gt;&lt;b&gt;Gao, Xifeng&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Tencent America, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Gingold, Yotam&lt;/b&gt;&lt;/p&gt;&lt;p&gt;George Mason University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Giorgi, Daniela&lt;/b&gt;&lt;/p&gt;&lt;p&gt;National Research Council of Italy, IT&lt;/p&gt;&lt;p&gt;&lt;b&gt;Guerrero, Paul&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Adobe Research, 
US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Herholz, Philipp&lt;/b&gt;&lt;/p&gt;&lt;p&gt;ETH Zurich, CH&lt;/p&gt;&lt;p&gt;&lt;b&gt;Hildebrandt, Klaus&lt;/b&gt;&lt;/p&gt;&lt;p&gt;TU Delft, NL&lt;/p&gt;&lt;p&gt;&lt;b&gt;Hoppe, Hugues&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Independent Researcher, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Hormann, Kai&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Università della Svizzera italiana, CH&lt;/p&gt;&lt;p&gt;&lt;b&gt;Huang, Jin&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Zhejiang University, CN&lt;/p&gt;&lt;p&gt;&lt;b&gt;Huang, Qixing&lt;/b&gt;&lt;/p&gt;&lt;p&gt;The University of Texas at Austin, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Jacobson, Alec&lt;/b&gt;&lt;/p&gt;&lt;p&gt;University of Toronto and Adobe Research, CA&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ju, Tao&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Washington University in St. Louis, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Kazhdan, Misha&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Johns Hopkins University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Keyser, John&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Texas A &amp; M University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Kim, Vladimir&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Adobe, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Kobbelt, Leif&lt;/b&gt;&lt;/p&gt;&lt;p&gt;RWTH Aachen University, DE&lt;/p&gt;&lt;p&gt;&lt;b&gt;Kosinka, Jiri&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Bern","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 5","pages":"i-x"},"PeriodicalIF":2.7,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15144","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142041552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A High-Scalability Graph Modification System for Large-Scale Networks
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum 43(6) · Pub Date: 2024-08-16 · DOI: 10.1111/cgf.15191
Shaobin Xu, Minghui Sun, Jun Qin
{"title":"A High-Scalability Graph Modification System for Large-Scale Networks","authors":"Shaobin Xu,&nbsp;Minghui Sun,&nbsp;Jun Qin","doi":"10.1111/cgf.15191","DOIUrl":"10.1111/cgf.15191","url":null,"abstract":"<p>Modifying network results is the most intuitive way to inject domain knowledge into network detection algorithms to improve their performance. While advances in computation scalability have made detecting large-scale networks possible, the human ability to modify such networks has not scaled accordingly, resulting in a huge ‘interaction gap’. Most existing works only support navigating and modifying edges one by one in a graph visualization, which causes a significant interaction burden when faced with large-scale networks. In this work, we propose a novel graph pattern mining algorithm based on the minimum description length (MDL) principle to partition and summarize multi-feature and isomorphic sub-graph matches. The mined sub-graph patterns can be utilized as mediums for modifying large-scale networks. Combining two traditional approaches, we introduce a new coarse-middle-fine graph modification paradigm (<i>i.e</i>. query graph-based modification <span></span><math></math> sub-graph pattern-based modification <span></span><math></math> raw edge-based modification). We further present a graph modification system that supports the graph modification paradigm for improving the scalability of modifying detected large-scale networks. We evaluate the performance of our graph pattern mining algorithm through an experimental study, demonstrate the usefulness of our system through a case study, and illustrate the efficiency of our graph modification paradigm through a user study.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 6","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
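The MDL intuition behind summarizing repeated sub-graph matches can be shown with a toy two-part description-length account: encoding a pattern once plus references to its occurrences should cost fewer bits than listing every matched edge. The bit costs below are illustrative assumptions, not the paper's encoding.

```python
def mdl_gain(total_edges, pattern_edges, occurrences,
             bits_per_edge=40, bits_per_occurrence=32):
    """Bits saved by replacing `occurrences` matches of a sub-graph pattern
    (each with `pattern_edges` edges) by one pattern definition plus
    per-occurrence references (toy accounting)."""
    plain = total_edges * bits_per_edge
    summarized = (pattern_edges * bits_per_edge              # define the pattern once
                  + occurrences * bits_per_occurrence        # point at each match
                  + (total_edges - occurrences * pattern_edges) * bits_per_edge)
    return plain - summarized

# Example: a 5-edge pattern matched 100 times in a 10,000-edge graph is worth
# summarizing under these toy costs (mdl_gain(10_000, 5, 100) == 16_600 bits).
```

Patterns with positive gain are the ones worth surfacing as mediums for coarse-grained modification; the rest stay at the raw-edge level.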
SMFS-GAN: Style-Guided Multi-class Freehand Sketch-to-Image Synthesis
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum 43(6) · Pub Date: 2024-08-07 · DOI: 10.1111/cgf.15190
Zhenwei Cheng, Lei Wu, Xiang Li, Xiangxu Meng
{"title":"SMFS-GAN: Style-Guided Multi-class Freehand Sketch-to-Image Synthesis","authors":"Zhenwei Cheng,&nbsp;Lei Wu,&nbsp;Xiang Li,&nbsp;Xiangxu Meng","doi":"10.1111/cgf.15190","DOIUrl":"10.1111/cgf.15190","url":null,"abstract":"<p>Freehand sketch-to-image (S2I) is a challenging task due to the individualized lines and the random shape of freehand sketches. The multi-class freehand sketch-to-image synthesis task, in turn, presents new challenges for this research area. This task requires not only the consideration of the problems posed by freehand sketches but also the analysis of multi-class domain differences in the conditions of a single model. However, existing methods often have difficulty learning domain differences between multiple classes, and cannot generate controllable and appropriate textures while maintaining shape stability. In this paper, we propose a style-guided multi-class freehand sketch-to-image synthesis model, SMFS-GAN, which can be trained using only unpaired data. To this end, we introduce a contrast-based style encoder that optimizes the network's perception of domain disparities by explicitly modelling the differences between classes and thus extracting style information across domains. Further, to optimize the fine-grained texture of the generated results and the shape consistency with freehand sketches, we propose a local texture refinement discriminator and a Shape Constraint Module, respectively. In addition, to address the imbalance of data classes in the QMUL-Sketch dataset, we add 6K images by drawing manually and obtain QMUL-Sketch+ dataset. Extensive experiments on SketchyCOCO Object dataset, QMUL-Sketch+ dataset and Pseudosketches dataset demonstrate the effectiveness as well as the superiority of our proposed method.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 6","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141948496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
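One common way to build a contrast-based style encoder is a supervised contrastive loss that pulls style codes of the same class together and pushes different classes apart. The sketch below shows that generic formulation; it is an assumption about the flavour of the loss, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def style_contrastive_loss(style_codes, class_labels, temperature=0.1):
    """Supervised contrastive loss over a batch of style codes.
    Assumes every class in the batch has at least two samples."""
    z = F.normalize(style_codes, dim=1)                       # (N, D) unit vectors
    sim = z @ z.t() / temperature                             # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    positives = (class_labels.unsqueeze(0) == class_labels.unsqueeze(1)) & ~self_mask
    log_prob = F.log_softmax(sim.masked_fill(self_mask, float("-inf")), dim=1)
    return -log_prob[positives].mean()
```

Trained this way, the encoder's codes cluster by class, which is what lets the generator condition on cross-domain style differences without paired data.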
Anisotropy and Cross Fields
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum 43(5) · Pub Date: 2024-08-05 · DOI: 10.1111/cgf.15132
L. Simons, N. Amenta
{"title":"Anisotropy and Cross Fields","authors":"L. Simons,&nbsp;N. Amenta","doi":"10.1111/cgf.15132","DOIUrl":"10.1111/cgf.15132","url":null,"abstract":"<p>We consider a cross field, possibly with singular points of valence 3 or 5, in which all streamlines are finite, and either end on the boundary or form cycles. We show that we can always assign lengths to the two cross field directions to produce an anisotropic orthogonal frame field. There is a one-dimensional family of such length functions, and we optimize within this family so that the two lengths are everywhere as similar as possible. This gives a numerical bound on the minimal anisotropy of any quad mesh exactly following the input cross field. We also show how to remove some limit cycles.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 5","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141948649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
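The last optimization step in the abstract, picking inside a one-parameter family of length assignments the member whose two lengths are globally most similar, can be sketched generically. The family itself is abstracted behind a callable, and the worst-case log length-ratio objective is an assumed stand-in for the paper's actual measure of anisotropy.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def most_isotropic_member(lengths_of, t_bounds=(-10.0, 10.0)):
    """lengths_of(t) -> (l1, l2): the two per-element length fields of the
    t-th member of a one-parameter family of orthogonal frame fields
    (hypothetical callable). Returns the parameter minimizing the worst
    log length-ratio and the residual anisotropy it achieves."""
    def anisotropy(t):
        l1, l2 = (np.asarray(a, float) for a in lengths_of(t))
        return float(np.max(np.abs(np.log(l1 / l2))))
    res = minimize_scalar(anisotropy, bounds=t_bounds, method="bounded")
    return res.x, res.fun
```

The returned residual corresponds to the kind of numerical lower bound on quad-mesh anisotropy that the abstract mentions, under the stated assumptions about the family and the objective.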