{"title":"Finding the Subjective Truth: Collecting 2 Million Votes for Comprehensive Gen-AI Model Evaluation","authors":"Dimitrios Christodoulou, Mads Kuhlmann-Jørgensen","doi":"arxiv-2409.11904","DOIUrl":null,"url":null,"abstract":"Efficiently evaluating the performance of text-to-image models is difficult\nas it inherently requires subjective judgment and human preference, making it\nhard to compare different models and quantify the state of the art. Leveraging\nRapidata's technology, we present an efficient annotation framework that\nsources human feedback from a diverse, global pool of annotators. Our study\ncollected over 2 million annotations across 4,512 images, evaluating four\nprominent models (DALL-E 3, Flux.1, MidJourney, and Stable Diffusion) on style\npreference, coherence, and text-to-image alignment. We demonstrate that our\napproach makes it feasible to comprehensively rank image generation models\nbased on a vast pool of annotators and show that the diverse annotator\ndemographics reflect the world population, significantly decreasing the risk of\nbiases.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11904","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Efficiently evaluating the performance of text-to-image models is difficult because it inherently requires subjective judgment and human preference, making it hard to compare different models and quantify the state of the art. Leveraging Rapidata's technology, we present an efficient annotation framework that sources human feedback from a diverse, global pool of annotators. Our study collected over 2 million annotations across 4,512 images, evaluating four prominent models (DALL-E 3, Flux.1, MidJourney, and Stable Diffusion) on style preference, coherence, and text-to-image alignment. We demonstrate that our approach makes it feasible to comprehensively rank image generation models based on a vast pool of annotators, and we show that the annotators' diverse demographics reflect the world population, significantly decreasing the risk of bias.
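
The abstract does not specify how the collected annotations are aggregated into a model ranking. As a minimal illustrative sketch (not the paper's method), the snippet below assumes the annotations take the form of pairwise preference votes and fits a simple Bradley-Terry model with the standard MM update; the `votes` data and model scores shown are hypothetical.

```python
# Minimal sketch: turning pairwise preference votes into a model ranking.
# Assumes votes are (winner, loser) pairs; fits Bradley-Terry strengths with
# the MM update  p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j).
from collections import defaultdict

# Hypothetical pairwise votes from individual annotators.
votes = [
    ("Flux.1", "Stable Diffusion"),
    ("DALL-E 3", "MidJourney"),
    ("Flux.1", "MidJourney"),
    ("MidJourney", "Stable Diffusion"),
    ("Stable Diffusion", "MidJourney"),
    ("DALL-E 3", "Stable Diffusion"),
    ("Flux.1", "DALL-E 3"),
]

models = sorted({m for pair in votes for m in pair})
wins = defaultdict(int)          # W_i: total wins per model
pair_counts = defaultdict(int)   # n_ij: comparisons per unordered pair

for winner, loser in votes:
    wins[winner] += 1
    pair_counts[frozenset((winner, loser))] += 1

# Iterate the MM update until the (normalized) strengths stabilize.
strength = {m: 1.0 for m in models}
for _ in range(200):
    new_strength = {}
    for i in models:
        denom = sum(
            pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
            for j in models if j != i
        )
        new_strength[i] = wins[i] / denom if denom > 0 else strength[i]
    total = sum(new_strength.values())
    strength = {m: s / total for m, s in new_strength.items()}

# Print models from strongest to weakest under this toy vote set.
for model, s in sorted(strength.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model}: {s:.3f}")
```

With the real 2-million-vote corpus, the same aggregation idea would simply replace the toy `votes` list; the point of the sketch is only that a global pool of pairwise judgments can be reduced to a single per-model score and hence a ranking.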