{"title":"在人格计算中实现通用和渐进式偏差缓解","authors":"Jian Jiang;Viswonathan Manoranjan;Hanan Salam;Oya Celiktutan","doi":"10.1109/TAFFC.2024.3409830","DOIUrl":null,"url":null,"abstract":"Building systems for predicting human socio-emotional states has promising applications; however, if trained on biased data, such systems could inadvertently yield biased decisions. Bias mitigation remains an open problem, which tackles the correction of a model's disparate performance over different groups defined by particular sensitive attributes (e.g., gender, age, and race). In this work, we design a novel fairness loss function named Multi-Group Parity (MGP) to provide a generalised approach for bias mitigation in personality computing. In contrast to existing works in the literature, MGP is generalised as it features four ‘multiple’ properties (4Mul): multiple tasks, multiple modalities, multiple sensitive attributes, and multi-valued attributes. Moreover, we explore how to incrementally mitigate the biases when more sensitive attributes are taken into consideration sequentially. Towards this problem, we introduce a novel algorithm that utilises an incremental learning framework to mitigate bias against one attribute data at a time without compromising past fairness. Extensive experiments on two large-scale multi-modal personality recognition datasets validate the effectiveness of our approach in achieving superior bias mitigation under the proposed four properties and incremental debiasing settings.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"15 4","pages":"2192-2203"},"PeriodicalIF":9.6000,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Towards Generalised and Incremental Bias Mitigation in Personality Computing\",\"authors\":\"Jian Jiang;Viswonathan Manoranjan;Hanan Salam;Oya Celiktutan\",\"doi\":\"10.1109/TAFFC.2024.3409830\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Building systems for predicting human socio-emotional states has promising applications; however, if trained on biased data, such systems could inadvertently yield biased decisions. Bias mitigation remains an open problem, which tackles the correction of a model's disparate performance over different groups defined by particular sensitive attributes (e.g., gender, age, and race). In this work, we design a novel fairness loss function named Multi-Group Parity (MGP) to provide a generalised approach for bias mitigation in personality computing. In contrast to existing works in the literature, MGP is generalised as it features four ‘multiple’ properties (4Mul): multiple tasks, multiple modalities, multiple sensitive attributes, and multi-valued attributes. Moreover, we explore how to incrementally mitigate the biases when more sensitive attributes are taken into consideration sequentially. Towards this problem, we introduce a novel algorithm that utilises an incremental learning framework to mitigate bias against one attribute data at a time without compromising past fairness. 
Extensive experiments on two large-scale multi-modal personality recognition datasets validate the effectiveness of our approach in achieving superior bias mitigation under the proposed four properties and incremental debiasing settings.\",\"PeriodicalId\":13131,\"journal\":{\"name\":\"IEEE Transactions on Affective Computing\",\"volume\":\"15 4\",\"pages\":\"2192-2203\"},\"PeriodicalIF\":9.6000,\"publicationDate\":\"2024-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Affective Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10549797/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10549797/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Towards Generalised and Incremental Bias Mitigation in Personality Computing
Building systems that predict human socio-emotional states has promising applications; however, if trained on biased data, such systems can inadvertently yield biased decisions. Bias mitigation, which aims to correct a model's disparate performance across groups defined by particular sensitive attributes (e.g., gender, age, and race), remains an open problem. In this work, we design a novel fairness loss function named Multi-Group Parity (MGP) to provide a generalised approach to bias mitigation in personality computing. In contrast to existing works in the literature, MGP is generalised in that it features four ‘multiple’ properties (4Mul): multiple tasks, multiple modalities, multiple sensitive attributes, and multi-valued attributes. Moreover, we explore how to incrementally mitigate biases as additional sensitive attributes are taken into consideration sequentially. To address this problem, we introduce a novel algorithm that uses an incremental learning framework to mitigate bias with respect to one sensitive attribute at a time without compromising fairness achieved for previous attributes. Extensive experiments on two large-scale multi-modal personality recognition datasets validate the effectiveness of our approach in achieving superior bias mitigation under the proposed four properties and incremental debiasing settings.
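The abstract describes MGP only at a high level, so as a rough illustration of what a parity-style fairness regulariser covering multiple and multi-valued sensitive attributes could look like, the following PyTorch sketch penalises the spread of group-wise mean losses. The function names and the variance-based disparity measure are assumptions for illustration; the paper's actual MGP loss is not reproduced here and may differ.

```python
# Illustrative sketch only: the disparity measure (variance of group-wise
# mean losses) and all names below are assumptions, not the authors' MGP.
import torch

def parity_penalty(per_sample_loss: torch.Tensor, group_ids: torch.Tensor) -> torch.Tensor:
    """Disparity of mean loss across the groups of one sensitive attribute.

    group_ids holds an integer group label per sample, so multi-valued
    attributes (e.g., several age bands) are handled naturally.
    """
    groups = torch.unique(group_ids)
    group_means = torch.stack([per_sample_loss[group_ids == g].mean() for g in groups])
    # Zero when every group incurs the same mean loss; grows with disparity.
    return ((group_means - group_means.mean()) ** 2).mean()

def fair_loss(per_sample_loss: torch.Tensor, attribute_ids: list, lam: float = 1.0) -> torch.Tensor:
    """Task loss plus one parity penalty per sensitive attribute."""
    task_loss = per_sample_loss.mean()
    penalty = sum(parity_penalty(per_sample_loss, ids) for ids in attribute_ids)
    return task_loss + lam * penalty

# Toy usage: per-sample losses with two sensitive attributes at once.
per_sample = torch.rand(8, requires_grad=True)
gender = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])    # binary attribute
age_band = torch.tensor([0, 1, 2, 0, 1, 2, 0, 1])  # multi-valued attribute
loss = fair_loss(per_sample, [gender, age_band])
loss.backward()
```

In the incremental setting the abstract describes, one could imagine introducing the penalty for each newly considered attribute while constraining drift on previously debiased attributes; the paper's concrete incremental algorithm is not detailed in this abstract.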
Journal Introduction:
The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.