{"title":"3DGCQA:三维人工智能生成内容质量评估数据库","authors":"Yingjie Zhou, Zicheng Zhang, Farong Wen, Jun Jia, Yanwei Jiang, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai","doi":"arxiv-2409.07236","DOIUrl":null,"url":null,"abstract":"Although 3D generated content (3DGC) offers advantages in reducing production\ncosts and accelerating design timelines, its quality often falls short when\ncompared to 3D professionally generated content. Common quality issues\nfrequently affect 3DGC, highlighting the importance of timely and effective\nquality assessment. Such evaluations not only ensure a higher standard of 3DGCs\nfor end-users but also provide critical insights for advancing generative\ntechnologies. To address existing gaps in this domain, this paper introduces a\nnovel 3DGC quality assessment dataset, 3DGCQA, built using 7 representative\nText-to-3D generation methods. During the dataset's construction, 50 fixed\nprompts are utilized to generate contents across all methods, resulting in the\ncreation of 313 textured meshes that constitute the 3DGCQA dataset. The\nvisualization intuitively reveals the presence of 6 common distortion\ncategories in the generated 3DGCs. To further explore the quality of the 3DGCs,\nsubjective quality assessment is conducted by evaluators, whose ratings reveal\nsignificant variation in quality across different generation methods.\nAdditionally, several objective quality assessment algorithms are tested on the\n3DGCQA dataset. The results expose limitations in the performance of existing\nalgorithms and underscore the need for developing more specialized quality\nassessment methods. To provide a valuable resource for future research and\ndevelopment in 3D content generation and quality assessment, the dataset has\nbeen open-sourced in https://github.com/zyj-2000/3DGCQA.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"26 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"3DGCQA: A Quality Assessment Database for 3D AI-Generated Contents\",\"authors\":\"Yingjie Zhou, Zicheng Zhang, Farong Wen, Jun Jia, Yanwei Jiang, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai\",\"doi\":\"arxiv-2409.07236\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although 3D generated content (3DGC) offers advantages in reducing production\\ncosts and accelerating design timelines, its quality often falls short when\\ncompared to 3D professionally generated content. Common quality issues\\nfrequently affect 3DGC, highlighting the importance of timely and effective\\nquality assessment. Such evaluations not only ensure a higher standard of 3DGCs\\nfor end-users but also provide critical insights for advancing generative\\ntechnologies. To address existing gaps in this domain, this paper introduces a\\nnovel 3DGC quality assessment dataset, 3DGCQA, built using 7 representative\\nText-to-3D generation methods. During the dataset's construction, 50 fixed\\nprompts are utilized to generate contents across all methods, resulting in the\\ncreation of 313 textured meshes that constitute the 3DGCQA dataset. The\\nvisualization intuitively reveals the presence of 6 common distortion\\ncategories in the generated 3DGCs. 
To further explore the quality of the 3DGCs,\\nsubjective quality assessment is conducted by evaluators, whose ratings reveal\\nsignificant variation in quality across different generation methods.\\nAdditionally, several objective quality assessment algorithms are tested on the\\n3DGCQA dataset. The results expose limitations in the performance of existing\\nalgorithms and underscore the need for developing more specialized quality\\nassessment methods. To provide a valuable resource for future research and\\ndevelopment in 3D content generation and quality assessment, the dataset has\\nbeen open-sourced in https://github.com/zyj-2000/3DGCQA.\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":\"26 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07236\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07236","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
3DGCQA: A Quality Assessment Database for 3D AI-Generated Contents
Although 3D generated content (3DGC) offers clear advantages in reducing production costs and accelerating design timelines, its quality often falls short of professionally generated 3D content. Quality issues commonly affect 3DGC, highlighting the importance of timely and effective quality assessment. Such evaluations not only ensure a higher standard of 3DGCs for end-users but also provide critical insights for advancing generative technologies.
To address existing gaps in this domain, this paper introduces a novel 3DGC quality assessment dataset, 3DGCQA, built using 7 representative Text-to-3D generation methods. During the dataset's construction, 50 fixed prompts are used to generate content with each method, yielding 313 textured meshes that constitute the 3DGCQA dataset. Visual inspection of the generated 3DGCs intuitively reveals 6 common distortion categories.
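A minimal sketch of iterating over the dataset's textured meshes follows. The directory layout (one folder per generation method, one mesh file per prompt) and the use of the trimesh library are assumptions for illustration, not the repository's documented structure; check the linked GitHub page for the actual layout.

```python
# Hypothetical walk over a local copy of 3DGCQA; layout is assumed, not documented here.
from pathlib import Path

import trimesh  # pip install trimesh

DATASET_ROOT = Path("3DGCQA")  # assumed local clone/download location

for mesh_path in sorted(DATASET_ROOT.rglob("*.obj")):
    mesh = trimesh.load(mesh_path, force="mesh")  # load textured mesh as a single Trimesh
    print(mesh_path.parent.name, mesh_path.stem,  # method name, prompt id (assumed naming)
          len(mesh.vertices), len(mesh.faces))
```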
To further explore the quality of the 3DGCs, a subjective quality assessment is conducted; the evaluators' ratings reveal significant variation in quality across the different generation methods.
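The abstract does not spell out how the raw ratings are aggregated. Below is a minimal sketch of the conventional mean opinion score (MOS) computation with per-subject z-score normalization, a common protocol in subjective quality studies; it is illustrative and not necessarily the procedure used to build 3DGCQA.

```python
import numpy as np

def mos_from_ratings(ratings: np.ndarray) -> np.ndarray:
    """MOS per stimulus from a (subjects x stimuli) rating matrix.

    Each subject's scores are z-normalized to remove individual rating
    bias, then averaged over subjects. Common practice, assumed here.
    """
    mean = ratings.mean(axis=1, keepdims=True)  # per-subject mean rating
    std = ratings.std(axis=1, keepdims=True)    # per-subject rating spread
    z = (ratings - mean) / (std + 1e-8)         # bias-corrected z-scores
    return z.mean(axis=0)                       # average across subjects

# Toy example: 3 subjects rating 4 stimuli on a 1-5 scale.
ratings = np.array([[4, 2, 5, 3],
                    [5, 1, 4, 3],
                    [4, 2, 4, 2]], dtype=float)
print(mos_from_ratings(ratings))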
Additionally, several objective quality assessment algorithms are tested on the 3DGCQA dataset. The results expose the performance limitations of existing algorithms and underscore the need for more specialized quality assessment methods.
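Objective metrics on databases like this are typically benchmarked by correlating their predictions against the subjective scores. A minimal sketch using SRCC and PLCC follows; the abstract does not state the exact criteria used, so treat this as the standard convention rather than the paper's protocol.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def benchmark(predicted: np.ndarray, mos: np.ndarray) -> dict:
    """Standard correlation criteria between metric outputs and MOS."""
    srcc = spearmanr(predicted, mos).correlation  # rank monotonicity
    plcc = pearsonr(predicted, mos)[0]            # linear accuracy
    return {"SRCC": srcc, "PLCC": plcc}

# Hypothetical metric scores and MOS values for 5 meshes.
print(benchmark(np.array([0.7, 0.2, 0.9, 0.4, 0.6]),
                np.array([3.8, 1.9, 4.5, 2.7, 3.1])))
```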
To provide a valuable resource for future research and development in 3D content generation and quality assessment, the dataset has been open-sourced at https://github.com/zyj-2000/3DGCQA.