Authors: Yuqing Zhang, Hao Xu, Yiqian Wu, Sirui Chen, Sirui Lin, Xiang Li, Xifeng Gao, Xiaogang Jin
Journal: ACM Transactions on Graphics (Impact Factor 9.5; JCR Q1, Computer Science, Software Engineering)
DOI: 10.1145/3731158 (https://doi.org/10.1145/3731158)
Publication Date: 2025-07-27
Publication Type: Journal Article
AlignTex: Pixel-Precise Texture Generation from Multi-view Artwork
Current 3D asset creation pipelines typically consist of three stages: creating multi-view concept art, producing 3D meshes based on the artwork, and painting textures for the meshes—an often labor-intensive process. Automated texture generation offers significant acceleration, but prior methods, which fine-tune 2D diffusion models with multi-view input images, often fail to preserve pixel-level details. These methods primarily emphasize semantic and subject consistency, which does not meet the requirements of artwork-guided texture workflows. To address this, we present AlignTex, a novel framework for generating high-quality textures from 3D meshes and multi-view artwork, ensuring both appearance detail and geometric consistency. AlignTex operates in two stages: aligned image generation and texture refinement. The core of our approach, AlignNet, resolves complex misalignments by extracting information from both the artwork and the mesh, generating images compatible with orthographic projection while maintaining geometric and visual fidelity. After projecting aligned images into the texture space, further refinement addresses seams and self-occlusion using an inpainting model and a geometry-aware texture dilation method. Experimental results demonstrate that AlignTex outperforms baseline methods in generation quality and efficiency, offering a practical solution to enhance 3D asset creation in gaming and film production.
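To illustrate the texture dilation step mentioned above, the sketch below shows a plain dilation pass in NumPy: valid texels grow outward into unfilled neighbors so that bilinear sampling near UV seams does not bleed background color. This is a hypothetical simplification for illustration only — the paper's geometry-aware variant additionally constrains growth using mesh geometry, which is omitted here, and the function name `dilate_texture` is our own.

```python
import numpy as np

def dilate_texture(texture, valid_mask, iterations=4):
    """Grow valid texels into unfilled neighbors of the UV atlas.

    texture:    (H, W, C) float array of texel colors.
    valid_mask: (H, W) bool array, True where a texel was painted.

    A geometry-unaware sketch: each pass copies colors from valid
    4-neighbors into still-empty texels. Border wrap-around from
    np.roll is ignored for brevity.
    """
    tex = texture.astype(np.float64).copy()
    mask = valid_mask.copy()
    for _ in range(iterations):
        new_tex = tex.copy()
        new_mask = mask.copy()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbor_valid = np.roll(mask, (dy, dx), axis=(0, 1))
            neighbor_color = np.roll(tex, (dy, dx), axis=(0, 1))
            fill = neighbor_valid & ~new_mask  # only fill empty texels
            new_tex[fill] = neighbor_color[fill]
            new_mask |= neighbor_valid
        tex, mask = new_tex, new_mask
    return tex, mask
```

Each iteration expands the painted region by one texel in every axis direction; a few iterations are typically enough to cover the margin that mipmapping or bilinear filtering can reach across a UV seam.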
Journal Introduction:
ACM Transactions on Graphics (TOG) is a peer-reviewed scientific journal that aims to disseminate the latest findings of note in the field of computer graphics. It has been published since 1982 by the Association for Computing Machinery. Starting in 2003, all papers accepted for presentation at the annual SIGGRAPH conference are printed in a special summer issue of the journal.