Patrik Joslin Kenfack, Daniil Dmitrievich Arapovy, Rasheed Hussain, S. Kazmi, A. Khan
{"title":"论生成对抗网络(GANs)的公平性","authors":"Patrik Joslin Kenfack, Daniil Dmitrievich Arapovy, Rasheed Hussain, S. Kazmi, A. Khan","doi":"10.1109/NIR52917.2021.9666131","DOIUrl":null,"url":null,"abstract":"Generative adversarial networks (GANs) are one of the greatest advances in AI in recent years. With their ability to directly learn the probability distribution of data and then sample synthetic realistic data. Many applications have emerged, using GANs to solve classical problems in machine learning, such as data augmentation, class imbalance problems, and fair representation learning. In this paper, we analyze and highlight the fairness concerns of GANs. In this regard, we show empirically that GANs models may inherently prefer certain groups during the training process and therefore they’re not able to homogeneously generate data from different groups during the testing phase. Furthermore, we propose solutions to solve this issue by conditioning the GAN model towards samples’ groups or using the ensemble method (boosting) to allow the GAN model to leverage distributed structure of data during the training phase and generate groups at an equal rate during the testing phase.","PeriodicalId":333109,"journal":{"name":"2021 International Conference \"Nonlinearity, Information and Robotics\" (NIR)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"On the Fairness of Generative Adversarial Networks (GANs)\",\"authors\":\"Patrik Joslin Kenfack, Daniil Dmitrievich Arapovy, Rasheed Hussain, S. Kazmi, A. Khan\",\"doi\":\"10.1109/NIR52917.2021.9666131\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Generative adversarial networks (GANs) are one of the greatest advances in AI in recent years. With their ability to directly learn the probability distribution of data and then sample synthetic realistic data. Many applications have emerged, using GANs to solve classical problems in machine learning, such as data augmentation, class imbalance problems, and fair representation learning. In this paper, we analyze and highlight the fairness concerns of GANs. In this regard, we show empirically that GANs models may inherently prefer certain groups during the training process and therefore they’re not able to homogeneously generate data from different groups during the testing phase. 
Furthermore, we propose solutions to solve this issue by conditioning the GAN model towards samples’ groups or using the ensemble method (boosting) to allow the GAN model to leverage distributed structure of data during the training phase and generate groups at an equal rate during the testing phase.\",\"PeriodicalId\":333109,\"journal\":{\"name\":\"2021 International Conference \\\"Nonlinearity, Information and Robotics\\\" (NIR)\",\"volume\":\"50 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference \\\"Nonlinearity, Information and Robotics\\\" (NIR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NIR52917.2021.9666131\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference \"Nonlinearity, Information and Robotics\" (NIR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NIR52917.2021.9666131","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
On the Fairness of Generative Adversarial Networks (GANs)
Generative adversarial networks (GANs) are one of the greatest advances in AI in recent years, thanks to their ability to directly learn the probability distribution of data and then sample realistic synthetic data. Many applications have emerged that use GANs to solve classical problems in machine learning, such as data augmentation, class imbalance, and fair representation learning. In this paper, we analyze and highlight the fairness concerns of GANs. In particular, we show empirically that GAN models may inherently favor certain groups during training and are therefore unable to generate data homogeneously across groups at test time. Furthermore, we propose solutions to this issue by conditioning the GAN model on the samples' group, or by using an ensemble method (boosting) that allows the GAN model to leverage the distributed structure of the data during training and generate all groups at an equal rate at test time.
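The following is a minimal sketch, not the authors' code, of the group-conditioning idea described in the abstract: a conditional GAN in which both the generator and the discriminator receive a one-hot group label, so that every group can be sampled at an equal rate at test time. The layer sizes, dimensions, and the `sample_synthetic` helper are illustrative assumptions.

```python
# Sketch of group-conditioned GAN components (assumed toy dimensions).
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, N_GROUPS = 64, 2, 2  # illustrative sizes


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + N_GROUPS, 128), nn.ReLU(),
            nn.Linear(128, DATA_DIM),
        )

    def forward(self, z, group_onehot):
        # Condition generation on the group label.
        return self.net(torch.cat([z, group_onehot], dim=1))


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DATA_DIM + N_GROUPS, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )

    def forward(self, x, group_onehot):
        # The discriminator also sees the group label.
        return self.net(torch.cat([x, group_onehot], dim=1))


def sample_synthetic(generator, n_per_group):
    """Sample every group at an equal rate at test time."""
    samples = []
    for g in range(N_GROUPS):
        z = torch.randn(n_per_group, LATENT_DIM)
        onehot = nn.functional.one_hot(
            torch.full((n_per_group,), g), N_GROUPS).float()
        samples.append(generator(z, onehot))
    return torch.cat(samples, dim=0)
```

The boosting-style ensemble alternative mentioned in the abstract would instead train multiple generators on reweighted portions of the data; it is not sketched here.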