{"title":"无偏差分:基于扩散模型的人脸图像生成中的偏差分析与缓解","authors":"Malsha V. Perera;Vishal M. Patel","doi":"10.1109/TBIOM.2024.3525037","DOIUrl":null,"url":null,"abstract":"Diffusion-based generative models have become increasingly popular in applications such as synthetic data generation and image editing, due to their ability to generate realistic, high-quality images. However, these models can exacerbate existing social biases, particularly regarding attributes like gender and race, potentially impacting downstream applications. In this paper, we analyze the presence of social biases in diffusion-based face generations and propose a novel sampling process guidance algorithm to mitigate these biases. Specifically, during the diffusion sampling process, we guide the generation to produce samples with attribute distributions that align with a balanced or desired attribute distribution. Our experiments demonstrate that diffusion models exhibit biases across multiple datasets in terms of gender and race. Moreover, our proposed method effectively mitigates these biases, making diffusion-based face generation more fair and inclusive.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 3","pages":"384-395"},"PeriodicalIF":5.0000,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unbiased-Diff: Analyzing and Mitigating Biases in Diffusion Model-Based Face Image Generation\",\"authors\":\"Malsha V. Perera;Vishal M. Patel\",\"doi\":\"10.1109/TBIOM.2024.3525037\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Diffusion-based generative models have become increasingly popular in applications such as synthetic data generation and image editing, due to their ability to generate realistic, high-quality images. However, these models can exacerbate existing social biases, particularly regarding attributes like gender and race, potentially impacting downstream applications. In this paper, we analyze the presence of social biases in diffusion-based face generations and propose a novel sampling process guidance algorithm to mitigate these biases. Specifically, during the diffusion sampling process, we guide the generation to produce samples with attribute distributions that align with a balanced or desired attribute distribution. Our experiments demonstrate that diffusion models exhibit biases across multiple datasets in terms of gender and race. 
Moreover, our proposed method effectively mitigates these biases, making diffusion-based face generation more fair and inclusive.\",\"PeriodicalId\":73307,\"journal\":{\"name\":\"IEEE transactions on biometrics, behavior, and identity science\",\"volume\":\"7 3\",\"pages\":\"384-395\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2025-01-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on biometrics, behavior, and identity science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10820122/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on biometrics, behavior, and identity science","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10820122/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Unbiased-Diff: Analyzing and Mitigating Biases in Diffusion Model-Based Face Image Generation
Diffusion-based generative models have become increasingly popular in applications such as synthetic data generation and image editing, due to their ability to generate realistic, high-quality images. However, these models can exacerbate existing social biases, particularly regarding attributes like gender and race, potentially impacting downstream applications. In this paper, we analyze the presence of social biases in diffusion-based face generation and propose a novel sampling-process guidance algorithm to mitigate these biases. Specifically, during the diffusion sampling process, we guide the generation to produce samples whose attribute distributions align with a balanced or desired attribute distribution. Our experiments demonstrate that diffusion models exhibit biases across multiple datasets in terms of gender and race. Moreover, our proposed method effectively mitigates these biases, making diffusion-based face generation fairer and more inclusive.
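The abstract describes the guidance idea only at a high level. The sketch below is a hypothetical illustration of how attribute-distribution guidance could be wired into a DDPM-style reverse step via classifier guidance: a target attribute is drawn per sample from the desired (e.g., balanced) distribution, and the reverse-step mean is shifted by the gradient of a noise-aware attribute classifier. The ToyDenoiser, ToyAttributeClassifier, guidance_scale, and the specific update rule are assumptions for illustration only, not the authors' published algorithm.

```python
# Hypothetical sketch of attribute-distribution-guided diffusion sampling
# (classifier-guidance style). All model names and the update rule are
# illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyDenoiser(nn.Module):
    """Stand-in epsilon-prediction network (a real model would be a U-Net)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x_t, t):
        return self.net(x_t)


class ToyAttributeClassifier(nn.Module):
    """Stand-in classifier for p(attribute | x_t), e.g., 2 gender classes."""
    def __init__(self, channels=3, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 8, 3, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, x_t, t):
        return self.net(x_t)  # logits


@torch.no_grad()
def guided_sampling(denoiser, classifier, target_probs, steps=50,
                    shape=(4, 3, 32, 32), guidance_scale=2.0):
    """DDPM-like reverse process where each sample is steered toward an
    attribute drawn from the desired attribute distribution `target_probs`."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)
    # One target attribute per sample, sampled from the desired distribution.
    targets = torch.multinomial(target_probs, shape[0], replacement=True)

    for t in reversed(range(steps)):
        eps = denoiser(x, t)

        # Classifier gradient needs autograd, so enable it locally.
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            log_probs = F.log_softmax(classifier(x_in, t), dim=-1)
            selected = log_probs[torch.arange(shape[0]), targets].sum()
            grad = torch.autograd.grad(selected, x_in)[0]

        # Standard DDPM posterior mean, shifted by the attribute-guidance term.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        mean = mean + guidance_scale * betas[t] * grad
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x


if __name__ == "__main__":
    balanced = torch.tensor([0.5, 0.5])  # e.g., equal target split over two classes
    samples = guided_sampling(ToyDenoiser(), ToyAttributeClassifier(), balanced)
    print(samples.shape)  # torch.Size([4, 3, 32, 32])
```

Because the target attribute for each sample is drawn from `target_probs`, the generated batch approximately follows that distribution; setting it to the empirical demographic distribution of a deployment population, rather than a uniform one, would steer generation toward that alternative target instead.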